Message ID: 20200619162414.1052234-13-ben.widawsky@intel.com (mailing list archive)
State: New, archived
From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm <linux-mm@kvack.org>
Subject: [PATCH 12/18] mm/mempolicy: Use __alloc_page_node for interleaved
Date: Fri, 19 Jun 2020 09:24:08 -0700
Message-Id: <20200619162414.1052234-13-ben.widawsky@intel.com>
In-Reply-To: <20200619162414.1052234-1-ben.widawsky@intel.com>
References: <20200619162414.1052234-1-ben.widawsky@intel.com>
Series: multiple preferred nodes
This helps reduce the consumers of the interface and gets us in better shape to clean up some of the low-level page allocation routines. The goal in doing that is to eventually limit the places we'll need to declare nodemask_t variables on the stack (more on that later).

Currently the only distinction between __alloc_pages_node() and __alloc_pages() is that the former does sanity checks on the gfp flags and the nid. In the case of interleave nodes, this isn't necessary because the caller has already figured out the right nid and flags with interleave_nodes(). This kills the only real user of __alloc_pages(), which can then be removed later.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 mm/mempolicy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3ce2354fed44..eb2520d68a04 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2220,7 +2220,7 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 {
 	struct page *page;
 
-	page = __alloc_pages(gfp, order, nid);
+	page = __alloc_pages_node(nid, gfp, order);
 	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
 	if (!static_branch_likely(&vm_numa_stat_key))
 		return page;