From patchwork Fri Jun 19 16:24:16 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11614605
From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm <linux-mm@kvack.org>
Cc: Ben Widawsky, Andrew Morton, Dan Williams, Dave Hansen, Mel Gorman
Subject: [PATCH 09/18] mm: Finish handling MPOL_PREFERRED_MANY
Date: Fri, 19 Jun 2020 09:24:16 -0700
Message-Id: <20200619162425.1052382-10-ben.widawsky@intel.com>
In-Reply-To: <20200619162425.1052382-1-ben.widawsky@intel.com>
References: <20200619162425.1052382-1-ben.widawsky@intel.com>

Now that there is a function to generate the preferred zonelist given a
preferred mask, a bindmask, and flags, it is possible to support the
MPOL_PREFERRED_MANY policy in more places.

This patch was developed on top of Dave's original work. When Dave wrote
his patches there was no clean way to implement MPOL_PREFERRED_MANY. Now
that the other bits are in place, this is easy to drop on top.

Cc: Andrew Morton
Cc: Dan Williams
Cc: Dave Hansen
Cc: Mel Gorman
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 include/linux/mmzone.h |  3 +++
 mm/mempolicy.c         | 20 ++++++++++++++++++--
 mm/page_alloc.c        |  5 ++---
 3 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c4c37fd12104..6b62ee98bb96 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1001,6 +1001,9 @@ struct zoneref *__next_zones_zonelist(struct zoneref *z,
 					enum zone_type highest_zoneidx,
 					nodemask_t *nodes);
 
+struct zonelist *preferred_zonelist(gfp_t gfp_mask, const nodemask_t *prefmask,
+				    const nodemask_t *bindmask);
+
 /**
  * next_zones_zonelist - Returns the next zone at or below highest_zoneidx within the allowed nodemask using a cursor within a zonelist as a starting point
  * @z - The cursor used as a starting point for the search
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index bfc4ef2af90d..90bc9c93b1b9 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1995,7 +1995,6 @@ unsigned int mempolicy_slab_node(void)
 		return node;
 
 	switch (policy->mode) {
-	case MPOL_PREFERRED_MANY:
 	case MPOL_PREFERRED:
 		/*
 		 * handled MPOL_F_LOCAL above
@@ -2020,6 +2019,18 @@ unsigned int mempolicy_slab_node(void)
 		return z->zone ? zone_to_nid(z->zone) : node;
 	}
 
+	case MPOL_PREFERRED_MANY: {
+		struct zoneref *z;
+		struct zonelist *zonelist;
+		enum zone_type highest_zoneidx = gfp_zone(GFP_KERNEL);
+
+		zonelist = preferred_zonelist(GFP_KERNEL,
+					      &policy->v.preferred_nodes, NULL);
+		z = first_zones_zonelist(zonelist, highest_zoneidx,
+					 &policy->v.nodes);
+		return z->zone ? zone_to_nid(z->zone) : node;
+	}
+
 	default:
 		BUG();
 	}
@@ -2585,7 +2596,12 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		polnid = zone_to_nid(z->zone);
 		break;
 
-	/* case MPOL_PREFERRED_MANY: */
+	case MPOL_PREFERRED_MANY:
+		z = first_zones_zonelist(preferred_zonelist(GFP_HIGHUSER,
+						&pol->v.preferred_nodes, NULL),
+					 gfp_zone(GFP_HIGHUSER), &pol->v.preferred_nodes);
+		polnid = zone_to_nid(z->zone);
+		break;
 
 	default:
 		BUG();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3cf44b6c31ae..c6f8f112a5d4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4861,9 +4861,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
  * NB: That zonelist will have *all* zones in the fallback case, and not all of
  * those zones will belong to preferred nodes.
  */
-static struct zonelist *preferred_zonelist(gfp_t gfp_mask,
-					   const nodemask_t *prefmask,
-					   const nodemask_t *bindmask)
+struct zonelist *preferred_zonelist(gfp_t gfp_mask, const nodemask_t *prefmask,
+				    const nodemask_t *bindmask)
 {
 	nodemask_t pref;
 	int nid, local_node = numa_mem_id();
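
For context on how this policy is meant to be consumed once the series
lands: set_mempolicy(2) gains a mode that expresses a preference for
several nodes while still permitting fallback, in contrast to MPOL_BIND.
Below is a minimal userspace sketch under stated assumptions: the
MPOL_PREFERRED_MANY value is hypothetical here, since the real constant
comes from the uapi header updated earlier in the series, not from this
patch. Build with -lnuma.

#include <numaif.h>	/* set_mempolicy() wrapper from libnuma */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical value for illustration only; the real constant is
 * exported by the series' updated <linux/mempolicy.h>. */
#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5
#endif

int main(void)
{
	/* Prefer nodes 0 and 2. Unlike MPOL_BIND, allocations may
	 * still fall back to other nodes, matching the "fallback
	 * case" note in the page_alloc.c comment above. */
	unsigned long nodemask = (1UL << 0) | (1UL << 2);

	if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
			  sizeof(nodemask) * 8) != 0) {
		perror("set_mempolicy(MPOL_PREFERRED_MANY)");
		return EXIT_FAILURE;
	}

	/* Subsequent page allocations for this task are biased
	 * toward the preferred nodes. */
	void *buf = malloc(1 << 20);
	free(buf);
	return EXIT_SUCCESS;
}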