From patchwork Tue Jun 30 21:25:12 2020
From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm <linux-mm@kvack.org>, linux-kernel@vger.kernel.org
Cc: Michal Hocko, Dave Hansen, Ben Widawsky, Andrew Morton, Dave Hansen
Subject: [PATCH 07/12] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND
Date: Tue, 30 Jun 2020 14:25:12 -0700
Message-Id: <20200630212517.308045-8-ben.widawsky@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200630212517.308045-1-ben.widawsky@intel.com>
References: <20200630212517.308045-1-ben.widawsky@intel.com>

This patch begins the real plumbing for handling this new policy. Now
that the internal representation for preferred nodes and bound nodes is
the same, and we can envision how multiple preferred nodes will behave,
there are obvious places where we can simply reuse the bind behavior.

In v1 of this series, the moral equivalent was "mm: Finish handling
MPOL_PREFERRED_MANY". Like that patch, this one implements the easiest
spots for the new policy. Unlike that patch, this one simply reuses the
BIND paths.

Cc: Andrew Morton
Cc: Dave Hansen
Cc: Michal Hocko
Signed-off-by: Ben Widawsky
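For illustration only, below is a minimal userspace sketch of the
semantics this series is aiming for: MPOL_BIND restricts allocations
to the given nodes, while MPOL_PREFERRED_MANY merely prefers them and
falls back to other nodes under pressure. It assumes the set_mempolicy()
plumbing wired up elsewhere in the series, and the MPOL_PREFERRED_MANY
value is an assumed placeholder, not taken from this patch:

/*
 * Illustration only -- not part of this patch. The numeric value
 * below is an assumption; the real uapi definition and syscall
 * plumbing land in other patches of this series. Build with -lnuma;
 * on a kernel without the series, the second call fails with EINVAL.
 */
#include <numaif.h>
#include <stdio.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value for illustration */
#endif

int main(void)
{
	/* nodes 0 and 2: bits 0 and 2 set in the nodemask */
	unsigned long nodemask = (1UL << 0) | (1UL << 2);
	unsigned long maxnode = sizeof(nodemask) * 8;

	/* BIND: allocations may only come from nodes 0 and 2 */
	if (set_mempolicy(MPOL_BIND, &nodemask, maxnode))
		perror("set_mempolicy(MPOL_BIND)");

	/*
	 * PREFERRED_MANY: try nodes 0 and 2 first, but fall back to
	 * other nodes instead of failing when they are exhausted.
	 */
	if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask, maxnode))
		perror("set_mempolicy(MPOL_PREFERRED_MANY)");

	return 0;
}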
---
 mm/mempolicy.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e71ebc906ff0..3b38c9c4e580 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -950,8 +950,6 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 	switch (p->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*nodes = p->nodes;
-		break;
 	case MPOL_PREFERRED_MANY:
 		*nodes = p->nodes;
 		break;
@@ -1938,7 +1936,8 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
 static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 {
 	/* Lower zones don't get a nodemask applied for MPOL_BIND */
-	if (unlikely(policy->mode == MPOL_BIND) &&
+	if (unlikely(policy->mode == MPOL_BIND ||
+		     policy->mode == MPOL_PREFERRED_MANY) &&
 	    apply_policy_zone(policy, gfp_zone(gfp)) &&
 	    cpuset_nodemask_valid_mems_allowed(&policy->nodes))
 		return &policy->nodes;
@@ -1995,7 +1994,6 @@ unsigned int mempolicy_slab_node(void)
 		return node;
 
 	switch (policy->mode) {
-	case MPOL_PREFERRED_MANY:
 	case MPOL_PREFERRED:
 		/*
 		 * handled MPOL_F_LOCAL above
@@ -2005,6 +2003,7 @@ unsigned int mempolicy_slab_node(void)
 	case MPOL_INTERLEAVE:
 		return interleave_nodes(policy);
 
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND: {
 		struct zoneref *z;
 
@@ -2020,6 +2019,7 @@ unsigned int mempolicy_slab_node(void)
 		return z->zone ? zone_to_nid(z->zone) : node;
 	}
 
+	default:
 		BUG();
 	}
 }
@@ -2130,9 +2130,6 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 	task_lock(current);
 	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
-	case MPOL_PREFERRED_MANY:
-		*mask = mempolicy->nodes;
-		break;
 	case MPOL_PREFERRED:
 		if (mempolicy->flags & MPOL_F_LOCAL)
 			nid = numa_node_id();
@@ -2143,6 +2140,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
+	case MPOL_PREFERRED_MANY:
 		*mask = mempolicy->nodes;
 		break;
 
@@ -2186,12 +2184,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 		 * Thus, it's possible for tsk to have allocated memory from
 		 * nodes in mask.
 		 */
-		break;
-	case MPOL_PREFERRED_MANY:
 		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
+	case MPOL_PREFERRED_MANY:
 		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	default:
@@ -2415,7 +2412,6 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	switch (a->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED_MANY:
 		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED:
@@ -2569,6 +2565,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		polnid = first_node(pol->nodes);
 		break;
 
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND:
 
 		/*
@@ -2585,8 +2582,6 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		polnid = zone_to_nid(z->zone);
 		break;
 
-	/* case MPOL_PREFERRED_MANY: */
-
 	default:
 		BUG();
 	}
@@ -3099,15 +3094,13 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 	switch (mode) {
 	case MPOL_DEFAULT:
 		break;
-	case MPOL_PREFERRED_MANY:
-		WARN_ON(flags & MPOL_F_LOCAL);
-		fallthrough;
 	case MPOL_PREFERRED:
 		if (flags & MPOL_F_LOCAL)
 			mode = MPOL_LOCAL;
 		else
 			nodes_or(nodes, nodes, pol->nodes);
 		break;
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
 		nodes = pol->nodes;