From patchwork Fri Jun 19 16:24:05 2020
From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm
Subject: [PATCH 09/18] mm: Finish handling MPOL_PREFERRED_MANY
Date: Fri, 19 Jun 2020 09:24:05 -0700
Message-Id: <20200619162414.1052234-10-ben.widawsky@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200619162414.1052234-1-ben.widawsky@intel.com>
References: <20200619162414.1052234-1-ben.widawsky@intel.com>

Now that there is a function to generate the preferred zonelist given a
preferred mask, bindmask, and flags, it is possible to easily support the
MPOL_PREFERRED_MANY policy in more places.

This patch was developed on top of Dave's original work. When Dave wrote
his patches, there was no clean way to implement MPOL_PREFERRED_MANY. Now
that the other bits are in place, this support is easy to drop on top.

Cc: Andrew Morton
Cc: Dan Williams
Cc: Dave Hansen
Cc: Mel Gorman
Signed-off-by: Ben Widawsky
---
 include/linux/mmzone.h |  3 +++
 mm/mempolicy.c         | 20 ++++++++++++++++++--
 mm/page_alloc.c        |  5 ++---
 3 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c4c37fd12104..6b62ee98bb96 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1001,6 +1001,9 @@ struct zoneref *__next_zones_zonelist(struct zoneref *z,
 					enum zone_type highest_zoneidx,
 					nodemask_t *nodes);
 
+struct zonelist *preferred_zonelist(gfp_t gfp_mask, const nodemask_t *prefmask,
+				    const nodemask_t *bindmask);
+
 /**
  * next_zones_zonelist - Returns the next zone at or below highest_zoneidx within the allowed nodemask using a cursor within a zonelist as a starting point
  * @z - The cursor used as a starting point for the search
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index bfc4ef2af90d..90bc9c93b1b9 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1995,7 +1995,6 @@ unsigned int mempolicy_slab_node(void)
 		return node;
 
 	switch (policy->mode) {
-	case MPOL_PREFERRED_MANY:
 	case MPOL_PREFERRED:
 		/*
 		 * handled MPOL_F_LOCAL above
@@ -2020,6 +2019,18 @@ unsigned int mempolicy_slab_node(void)
 		return z->zone ? zone_to_nid(z->zone) : node;
 	}
 
+	case MPOL_PREFERRED_MANY: {
+		struct zoneref *z;
+		struct zonelist *zonelist;
+		enum zone_type highest_zoneidx = gfp_zone(GFP_KERNEL);
+
+		zonelist = preferred_zonelist(GFP_KERNEL,
+					      &policy->v.preferred_nodes, NULL);
+		z = first_zones_zonelist(zonelist, highest_zoneidx,
+					 &policy->v.nodes);
+		return z->zone ? zone_to_nid(z->zone) : node;
+	}
+
 	default:
 		BUG();
 	}
@@ -2585,7 +2596,12 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		polnid = zone_to_nid(z->zone);
 		break;
 
-	/* case MPOL_PREFERRED_MANY: */
+	case MPOL_PREFERRED_MANY:
+		z = first_zones_zonelist(preferred_zonelist(GFP_HIGHUSER,
+					&pol->v.preferred_nodes, NULL),
+				gfp_zone(GFP_HIGHUSER), &pol->v.preferred_nodes);
+		polnid = zone_to_nid(z->zone);
+		break;
 
 	default:
 		BUG();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3cf44b6c31ae..c6f8f112a5d4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4861,9 +4861,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
  * NB: That zonelist will have *all* zones in the fallback case, and not all of
  * those zones will belong to preferred nodes.
  */
-static struct zonelist *preferred_zonelist(gfp_t gfp_mask,
-					   const nodemask_t *prefmask,
-					   const nodemask_t *bindmask)
+struct zonelist *preferred_zonelist(gfp_t gfp_mask, const nodemask_t *prefmask,
+				    const nodemask_t *bindmask)
 {
 	nodemask_t pref;
 	int nid, local_node = numa_mem_id();
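
For readers following the series: the pattern this patch repeats in both
mempolicy_slab_node() and mpol_misplaced() is to build a zonelist biased
toward the preferred nodes with preferred_zonelist(), then take the first
usable zone from it. A minimal kernel-style sketch of that pattern,
assuming a tree with this series applied; resolve_preferred_nid() is a
hypothetical name used here for illustration, not something the series
adds:

	/*
	 * Hypothetical helper, illustration only (not part of this series):
	 * resolve a "preferred many" nodemask to a concrete NUMA node.
	 */
	static int resolve_preferred_nid(gfp_t gfp, nodemask_t *prefmask,
					 int fallback_nid)
	{
		struct zonelist *zonelist;
		struct zoneref *z;

		/* Zonelist ordered so zones on preferred nodes come first. */
		zonelist = preferred_zonelist(gfp, prefmask, NULL);

		/* First zone at or below the highest zone index gfp allows. */
		z = first_zones_zonelist(zonelist, gfp_zone(gfp), prefmask);

		return z->zone ? zone_to_nid(z->zone) : fallback_nid;
	}

Both call sites in this patch pass a NULL bindmask, i.e. no binding
restriction, so per the NB comment above preferred_zonelist() the
resulting zonelist still contains all zones as fallback and the
zone_to_nid() result may land outside the preferred set.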