From patchwork Wed Mar  3 10:20:53 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12113241
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
 Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
 Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v3 09/14] mm/mempolicy: Thread allocation for many preferred
Date: Wed, 3 Mar 2021 18:20:53 +0800
Message-Id: <1614766858-90344-10-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
References: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

In order to support MPOL_PREFERRED_MANY as the mode used by
set_mempolicy(2), alloc_pages_current() needs to support it. This patch
does that by using the new helper, alloc_pages_policy(), to allocate
based on the task policy.

All the actual machinery to make this work was added in the earlier
patch ("mm/mempolicy: Create a page allocator for policy").

Link: https://lore.kernel.org/r/20200630212517.308045-10-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a737e02..ceee90e 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2343,7 +2343,7 @@ EXPORT_SYMBOL(alloc_pages_vma);
 struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 {
 	struct mempolicy *pol = &default_policy;
-	struct page *page;
+	int nid = NUMA_NO_NODE;
 
 	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
 		pol = get_task_policy(current);
@@ -2353,14 +2353,9 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * nor system default_policy
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
-		page = alloc_pages_policy(pol, gfp, order,
-					  interleave_nodes(pol));
-	else
-		page = __alloc_pages_nodemask(gfp, order,
-				policy_node(gfp, pol, numa_node_id()),
-				policy_nodemask(gfp, pol));
+		nid = interleave_nodes(pol);
 
-	return page;
+	return alloc_pages_policy(pol, gfp, order, nid);
 }
 EXPORT_SYMBOL(alloc_pages_current);
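
For reference, a minimal userspace sketch of what this enables: a task
selects MPOL_PREFERRED_MANY via set_mempolicy(2), after which its page
allocations flow through alloc_pages_current() -> alloc_pages_policy()
as changed above. This is illustrative only, not part of the patch; the
MPOL_PREFERRED_MANY definition and its numeric value come from this
series' uapi changes and are assumed here.

/*
 * Hypothetical example (not from this patch): set a preferred-many
 * task policy, then fault in some memory under it. On a kernel
 * without this series, set_mempolicy() fails with EINVAL.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value; check this series' linux/mempolicy.h */
#endif

int main(void)
{
	/* Prefer allocations from nodes 0 and 1, with fallback elsewhere. */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);

	/* set_mempolicy(2): mode, nodemask, maxnode (bits in mask + 1). */
	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY,
		    &nodemask, 8 * sizeof(nodemask) + 1) != 0) {
		perror("set_mempolicy");
		return EXIT_FAILURE;
	}

	/*
	 * Allocations for this task now take the policy path above:
	 * pages come from the preferred set when possible, falling
	 * back to other nodes only under pressure.
	 */
	char *buf = malloc(64UL << 20);
	if (buf)
		memset(buf, 0, 64UL << 20);	/* fault the pages in */
	free(buf);
	return 0;
}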