From patchwork Thu Sep 24 08:25:08 2020
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 11796557
From: Huang Ying
To: Peter Zijlstra
(Oracle)" , Andrew Morton , Ingo Molnar , Mel Gorman , Rik van Riel , Johannes Weiner , Dave Hansen , Andi Kleen , Michal Hocko , David Rientjes Subject: [PATCH 1/2] mempolicy: Rename MPOL_F_MORON to MPOL_F_MOPRON Date: Thu, 24 Sep 2020 16:25:08 +0800 Message-Id: <20200924082509.445336-1-ying.huang@intel.com> X-Mailer: git-send-email 2.28.0 MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: To follow code-of-conduct better. Signed-off-by: "Huang, Ying" Suggested-by: "Matthew Wilcox (Oracle)" Cc: Andrew Morton Cc: Ingo Molnar Cc: Mel Gorman Cc: Rik van Riel Cc: Johannes Weiner Cc: Dave Hansen Cc: Andi Kleen Cc: Michal Hocko Cc: David Rientjes Acked-by: Rafael Aquini --- include/uapi/linux/mempolicy.h | 2 +- kernel/sched/debug.c | 2 +- mm/mempolicy.c | 6 +++--- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index 3354774af61e..3c3666d017e6 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -60,7 +60,7 @@ enum { #define MPOL_F_SHARED (1 << 0) /* identify shared policies */ #define MPOL_F_LOCAL (1 << 1) /* preferred local allocation */ #define MPOL_F_MOF (1 << 3) /* this policy wants migrate on fault */ -#define MPOL_F_MORON (1 << 4) /* Migrate On protnone Reference On Node */ +#define MPOL_F_MOPRON (1 << 4) /* Migrate On Protnone Reference On Node */ #endif /* _UAPI_LINUX_MEMPOLICY_H */ diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index 36c54265bb2b..26495a344d8d 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -844,7 +844,7 @@ static void sched_show_numa(struct task_struct *p, struct seq_file *m) task_lock(p); pol = p->mempolicy; - if (pol && !(pol->flags & MPOL_F_MORON)) + if (pol && !(pol->flags & MPOL_F_MOPRON)) pol = NULL; mpol_get(pol); task_unlock(p); diff --git a/mm/mempolicy.c b/mm/mempolicy.c index eddbe4e56c73..62cd159aa46d 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2515,7 +2515,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long } /* Migrate the page towards the node whose CPU is referencing it */ - if (pol->flags & MPOL_F_MORON) { + if (pol->flags & MPOL_F_MOPRON) { polnid = thisnid; if (!should_numa_migrate_memory(current, page, curnid, thiscpu)) @@ -2806,7 +2806,7 @@ void __init numa_policy_init(void) preferred_node_policy[nid] = (struct mempolicy) { .refcnt = ATOMIC_INIT(1), .mode = MPOL_PREFERRED, - .flags = MPOL_F_MOF | MPOL_F_MORON, + .flags = MPOL_F_MOF | MPOL_F_MOPRON, .v = { .preferred_node = nid, }, }; } @@ -3014,7 +3014,7 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) unsigned short mode = MPOL_DEFAULT; unsigned short flags = 0; - if (pol && pol != &default_policy && !(pol->flags & MPOL_F_MORON)) { + if (pol && pol != &default_policy && !(pol->flags & MPOL_F_MOPRON)) { mode = pol->mode; flags = pol->flags; } From patchwork Thu Sep 24 08:25:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Ying" X-Patchwork-Id: 11796559 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 573026CA for ; Thu, 24 Sep 2020 08:25:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 292EF208B8 for ; Thu, 24 
From patchwork Thu Sep 24 08:25:09 2020
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 11796559
From: Huang Ying
To: Peter Zijlstra
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
 Andrew Morton, Ingo Molnar, Mel Gorman, Rik van Riel, Johannes Weiner,
 "Matthew Wilcox (Oracle)", Dave Hansen, Andi Kleen, Michal Hocko,
 David Rientjes
Subject: [PATCH 2/2] autonuma: Migrate on fault among multiple bound nodes
Date: Thu, 24 Sep 2020 16:25:09 +0800
Message-Id: <20200924082509.445336-2-ying.huang@intel.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200924082509.445336-1-ying.huang@intel.com>
References: <20200924082509.445336-1-ying.huang@intel.com>

Now, AutoNUMA can only optimize page placement among the NUMA nodes if
the default memory policy is used, because an explicitly specified
memory policy should take precedence. But this seems too strict in
some situations. For example, on a system with 4 NUMA nodes, if the
memory of an application is bound to nodes 0 and 1, AutoNUMA could
still migrate the application's pages between nodes 0 and 1 to reduce
cross-node accesses without breaking the explicit memory binding
policy.

So with this patch, if mbind(.mode=MPOL_BIND, .flags=MPOL_MF_LAZY) is
used to bind the memory of the application to multiple nodes, and, in
the NUMA hint page fault handler, both the node of the faulting page
and the accessing node are in the policy nodemask, the kernel will try
to migrate the page to the accessing node to reduce cross-node
accesses.

[Peter Zijlstra: provided the simplified implementation method.]

Questions:

The sysctl knob kernel.numa_balancing can enable/disable AutoNUMA
optimization globally. But for memory areas that are bound to multiple
NUMA nodes, even if AutoNUMA is enabled globally via the sysctl knob,
it still needs to be enabled again per mapping with a special flag.
Why not just optimize the page placement whenever possible, as long as
AutoNUMA is enabled globally? The interface would be simpler that way.

Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Mel Gorman
Cc: Rik van Riel
Cc: Johannes Weiner
Cc: "Matthew Wilcox (Oracle)"
Cc: Dave Hansen
Cc: Andi Kleen
Cc: Michal Hocko
Cc: David Rientjes
---
 mm/mempolicy.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 62cd159aa46d..73119ee460c6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2494,14 +2494,19 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		break;
 
 	case MPOL_BIND:
 		/*
-		 * allows binding to multiple nodes.
-		 * use current page if in policy nodemask,
-		 * else select nearest allowed node, if any.
-		 * If no allowed nodes, use current [!misplaced].
+		 * Allows binding to multiple nodes. If both current and
+		 * accessing nodes are in policy nodemask, migrate to
+		 * accessing node to optimize page placement. Otherwise,
+		 * use current page if in policy nodemask, else select
+		 * nearest allowed node, if any. If no allowed nodes, use
+		 * current [!misplaced].
 		 */
-		if (node_isset(curnid, pol->v.nodes))
+		if (node_isset(curnid, pol->v.nodes)) {
+			if (node_isset(thisnid, pol->v.nodes))
+				goto mopron;
 			goto out;
+		}
 		z = first_zones_zonelist(
 				node_zonelist(numa_node_id(), GFP_HIGHUSER),
 				gfp_zone(GFP_HIGHUSER),
@@ -2516,6 +2521,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 
 	/* Migrate the page towards the node whose CPU is referencing it */
 	if (pol->flags & MPOL_F_MOPRON) {
+mopron:
 		polnid = thisnid;
 
 		if (!should_numa_migrate_memory(current, page, curnid, thiscpu))
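
The usage described in the commit message above can be sketched in
userspace roughly as follows. This is a hypothetical illustration, not
part of the patch: it assumes libnuma's <numaif.h> for mbind() and
MPOL_BIND (build with -lnuma), defines MPOL_MF_LAZY from its uapi value
if the header lacks it, and uses an arbitrary region size and node
numbers. Note that mainline kernels have so far hidden MPOL_MF_LAZY
from userspace, so mbind() may fail with EINVAL on a kernel without a
change like this series.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <numaif.h>		/* mbind(), MPOL_BIND, MPOL_MF_MOVE */

#ifndef MPOL_MF_LAZY
#define MPOL_MF_LAZY	(1 << 3)	/* value from include/uapi/linux/mempolicy.h */
#endif

int main(void)
{
	size_t len = 64UL << 20;	/* arbitrary example size: 64 MB */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);	/* bind to nodes 0 and 1 */
	void *addr;

	addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * MPOL_MF_MOVE migrates already-allocated pages into the nodemask;
	 * MPOL_MF_LAZY marks the range for migrate-on-fault, which this
	 * patch extends to work among the bound nodes. The kernel may
	 * reject MPOL_MF_LAZY if it is hidden from userspace.
	 */
	if (mbind(addr, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8,
		  MPOL_MF_MOVE | MPOL_MF_LAZY)) {
		perror("mbind");
		return 1;
	}

	memset(addr, 0, len);	/* touch pages; later hint faults may migrate them */
	return 0;
}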