From patchwork Fri Jun 19 16:24:24 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11614617
From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm <linux-mm@kvack.org>
Cc: Ben Widawsky, Andrew Morton, Michal Hocko, Tejun Heo
Subject: [PATCH 17/18] mm: Use less stack for page allocations
Date: Fri, 19 Jun 2020 09:24:24 -0700
Message-Id: <20200619162425.1052382-18-ben.widawsky@intel.com>
In-Reply-To: <20200619162425.1052382-1-ben.widawsky@intel.com>
References: <20200619162425.1052382-1-ben.widawsky@intel.com>
X-Mailer: git-send-email 2.27.0

After converting __alloc_pages_nodemask to take in a preferred nodemask,
__alloc_pages_node is left holding the bag as the helper that still
requires stack space, since it needs to generate a nodemask for the
specific node. This patch attempts to remove all callers of it except
where absolutely necessary, to avoid using stack space that is
theoretically significant on huge NUMA systems. It turns out there
aren't many opportunities to do this, as all callers know exactly which
node they want.

The difference between __alloc_pages_node and alloc_pages_node is that
the former is meant for explicit node allocation, while the latter also
supports expressing no preference (by passing NUMA_NO_NODE as nid). It
is now clear that NUMA_NO_NODE can be handled without using stack space
via some of the newer functions that have been added, in particular
__alloc_pages_nodes and __alloc_pages_nodemask.

In the non-NUMA case, alloc_pages used numa_node_id(), which is 0.
Switching it to NUMA_NO_NODE lets us avoid using the stack there as
well.

Cc: Andrew Morton
Cc: Michal Hocko
Cc: Tejun Heo
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 include/linux/gfp.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 47e9c02c17ae..e78982ef9349 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -532,7 +532,7 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 						unsigned int order)
 {
 	if (nid == NUMA_NO_NODE)
-		nid = numa_mem_id();
+		return __alloc_pages_nodes(NULL, gfp_mask, order);
 
 	return __alloc_pages_node(nid, gfp_mask, order);
 }
@@ -551,8 +551,8 @@ extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
 #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
 	alloc_pages_vma(gfp_mask, order, vma, addr, numa_node_id(), true)
 #else
-#define alloc_pages(gfp_mask, order) \
-	alloc_pages_node(numa_node_id(), gfp_mask, order)
+#define alloc_pages(gfp_mask, order) \
+	alloc_pages_node(NUMA_NO_NODE, gfp_mask, order)
 #define alloc_pages_vma(gfp_mask, order, vma, addr, node, false)\
 	alloc_pages(gfp_mask, order)
 #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
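
To make the stack cost concrete, here is a standalone userspace sketch.
It is only an illustration, not the kernel code: the nodemask layout,
MAX_NUMNODES value, and helper names below are assumptions chosen for
the example. The point is that an explicit-node request has to
materialize a node bitmap on the stack, while "no preference" can be
expressed as a NULL mask and costs nothing:

/*
 * Illustrative sketch only -- not the kernel implementation.
 * MAX_NUMNODES, nodemask_t layout, and helper names are assumptions.
 */
#include <stdio.h>
#include <string.h>

#define MAX_NUMNODES	1024			/* e.g. NODES_SHIFT = 10 */
#define NODEMASK_BYTES	(MAX_NUMNODES / 8)	/* 128 bytes per mask */

typedef struct { unsigned char bits[NODEMASK_BYTES]; } nodemask_t;

/* Stand-in for __alloc_pages_nodemask(): NULL means "no preference". */
static void alloc_with_mask(const nodemask_t *prefmask)
{
	printf("preferred mask: %s\n", prefmask ? "explicit" : "none");
}

/* Explicit node: a full nodemask must be built on the stack. */
static void alloc_on_node(int nid)
{
	nodemask_t mask;			/* 128 bytes of stack */

	memset(&mask, 0, sizeof(mask));
	mask.bits[nid / 8] |= 1u << (nid % 8);
	alloc_with_mask(&mask);
}

/* No preference: pass NULL and skip the stack allocation entirely. */
static void alloc_any_node(void)
{
	alloc_with_mask(NULL);
}

int main(void)
{
	alloc_on_node(3);
	alloc_any_node();
	return 0;
}

With 1024 possible nodes a mask is 1024 bits, i.e. 128 bytes, per call
site that builds one; that is the "theoretically significant" stack
usage on huge NUMA systems that the NUMA_NO_NODE path above avoids.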