From patchwork Fri Jun 19 16:24:20 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11614621
From: Ben Widawsky
To: linux-mm
Cc: Ben Widawsky, Andrew Morton, Michal Hocko
Subject: [PATCH 13/18] mm: kill __alloc_pages
Date: Fri, 19 Jun 2020 09:24:20 -0700
Message-Id: <20200619162425.1052382-14-ben.widawsky@intel.com>
In-Reply-To: <20200619162425.1052382-1-ben.widawsky@intel.com>
References: <20200619162425.1052382-1-ben.widawsky@intel.com>

IMPORTANT NOTE: It's unclear how safe it is to declare a nodemask_t on the
stack, since nodemask_t can be relatively large on huge NUMA systems.
Upcoming patches will try to limit this.

The primary purpose of this patch is to clear up which interfaces should be
used for page allocation. There are several attributes in page allocation
beyond the obvious gfp and order:

1. node mask: set of nodes to try to allocate from; fail if unavailable
2. preferred nid: a preferred node to try to allocate from, falling back to
   the node mask if unavailable
3. (soon) preferred mask: like preferred nid, but multiple nodes

Here's a summary of the existing interfaces and which attributes they cover:

*alloc_pages: ()
*alloc_pages_node: (2)
__alloc_pages_nodemask: (1,2,3)

I am instead proposing the following interfaces as a reasonable set.
Generally, node binding isn't used by kernel code; it's only used for
mempolicy. On the other hand, the kernel does have preferred nodes (today
it's only one), and that is why those interfaces exist while an interface to
specify binding does not.

alloc_pages: ()			I don't care, give me pages.
alloc_pages_node: (2)		I want pages from this particular node first
alloc_pages_nodes: (3)		I want pages from *these* nodes first
__alloc_pages_nodemask: (1,2,3)	I'm picky about my pages

Cc: Andrew Morton
Cc: Michal Hocko
Signed-off-by: Ben Widawsky
---
 include/linux/gfp.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 67a0774e080b..9ab5c07579bd 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -504,9 +504,10 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 							nodemask_t *nodemask);
 
 static inline struct page *
-__alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
+__alloc_pages_nodes(nodemask_t *nodes, gfp_t gfp_mask, unsigned int order)
 {
-	return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
+	return __alloc_pages_nodemask(gfp_mask, order, first_node(*nodes),
+				      NULL);
 }
 
 /*
@@ -516,10 +517,12 @@ __alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
 static inline struct page *
 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
+	nodemask_t tmp;
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
 
-	return __alloc_pages(gfp_mask, order, nid);
+	tmp = nodemask_of_node(nid);
+	return __alloc_pages_nodes(&tmp, gfp_mask, order);
 }
 
 /*
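
[Not part of the patch itself.] To make the intended use of the proposed
interface concrete, here is a minimal caller-side sketch of allocating with a
preference for a small set of nodes via __alloc_pages_nodes(). The helper name
example_alloc_from_two_nodes is made up for illustration; note that, as
defined in the hunk above, __alloc_pages_nodes() currently passes only
first_node() of the mask through to __alloc_pages_nodemask().

static struct page *example_alloc_from_two_nodes(gfp_t gfp_mask,
						 unsigned int order,
						 int nid_a, int nid_b)
{
	/*
	 * On-stack nodemask_t -- the concern flagged in the IMPORTANT NOTE
	 * above applies here on large MAX_NUMNODES configurations.
	 */
	nodemask_t nodes = NODE_MASK_NONE;

	node_set(nid_a, nodes);
	node_set(nid_b, nodes);

	/*
	 * Express a preference for 'nodes'. As implemented in this patch,
	 * only the first node in the mask is used as the preferred nid;
	 * upcoming patches are expected to widen this to a true mask.
	 */
	return __alloc_pages_nodes(&nodes, gfp_mask, order);
}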