From patchwork Fri Jun 19 16:24:09 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Widawsky <ben.widawsky@intel.com>
X-Patchwork-Id: 11614587
From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm <linux-mm@kvack.org>
Subject: [PATCH 13/18] mm: kill __alloc_pages
Date: Fri, 19 Jun 2020 09:24:09 -0700
Message-Id: <20200619162414.1052234-14-ben.widawsky@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200619162414.1052234-1-ben.widawsky@intel.com>
References: <20200619162414.1052234-1-ben.widawsky@intel.com>

IMPORTANT NOTE: It's unclear how safe it is to declare a nodemask_t on the
stack, since nodemask_t can be relatively large on huge NUMA systems.
Upcoming patches will try to limit this.

The primary purpose of this patch is to clear up which interfaces should be
used for page allocation. There are several attributes in page allocation
after the obvious gfp and order:

1. node mask: set of nodes to try to allocate from; fail if unavailable
2. preferred nid: a preferred node to try to allocate from, falling back to
   the node mask if unavailable
3. (soon) preferred mask: like preferred nid, but multiple nodes

Here's a summary of the existing interfaces, and which of these they cover:

*alloc_pages: ()
*alloc_pages_node: (2)
__alloc_pages_nodemask: (1,2,3)

I am instead proposing the following interfaces as a reasonable set.
Generally, node binding isn't used by kernel code; it's only used for
mempolicy. On the other hand, the kernel does have preferred nodes (today
it's only one), and that is why those interfaces exist while an interface
to specify binding does not.

alloc_pages: ()                 I don't care, give me pages.
alloc_pages_node: (2)           I want pages from this particular node first.
alloc_pages_nodes: (3)          I want pages from *these* nodes first.
__alloc_pages_nodemask: (1,2,3) I'm picky about my pages.

Cc: Andrew Morton
Cc: Michal Hocko
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 include/linux/gfp.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 67a0774e080b..9ab5c07579bd 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -504,9 +504,10 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 							nodemask_t *nodemask);
 
 static inline struct page *
-__alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
+__alloc_pages_nodes(nodemask_t *nodes, gfp_t gfp_mask, unsigned int order)
 {
-	return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
+	return __alloc_pages_nodemask(gfp_mask, order, first_node(*nodes),
+				      NULL);
 }
 
 /*
@@ -516,10 +517,12 @@ __alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
 static inline struct page *
 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
+	nodemask_t tmp;
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
 
-	return __alloc_pages(gfp_mask, order, nid);
+	tmp = nodemask_of_node(nid);
+	return __alloc_pages_nodes(&tmp, gfp_mask, order);
 }
 
 /*
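
For illustration only (not part of this patch): a minimal sketch of how a
caller might use the proposed __alloc_pages_nodes() helper with more than
one preferred node. The caller name example_alloc_from_nodes and the choice
of nodes 0 and 1 are made up for this example; with this patch only
first_node() of the mask is actually honored, and the full preferred-mask
behavior is left to later patches in the series.

#include <linux/gfp.h>
#include <linux/nodemask.h>

/*
 * Hypothetical caller: prefer pages from nodes 0 and 1.  Note the
 * nodemask_t on the stack -- exactly the pattern the IMPORTANT NOTE
 * above flags as potentially problematic on huge NUMA systems.
 */
static struct page *example_alloc_from_nodes(gfp_t gfp_mask, unsigned int order)
{
	nodemask_t preferred = NODE_MASK_NONE;

	node_set(0, preferred);
	node_set(1, preferred);

	/*
	 * As defined in this patch, __alloc_pages_nodes() forwards
	 * first_node(preferred) as the preferred nid to
	 * __alloc_pages_nodemask(); later patches are expected to turn
	 * this into a real preferred mask.
	 */
	return __alloc_pages_nodes(&preferred, gfp_mask, order);
}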