From patchwork Wed Mar 17 03:40:08 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12144675
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v4 11/13] mm/mempolicy: huge-page allocation for many preferred
Date: Wed, 17 Mar 2021 11:40:08 +0800
Message-Id: <1615952410-36895-12-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
References: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

Implement the missing huge page allocation functionality while obeying the
preferred node semantics. This uses a fallback mechanism that tries the
preferred nodes first and then falls back to all other nodes. It cannot reuse
the helper function introduced earlier in this series because huge page
allocation already has its own helpers, and consolidating them would have
taken more lines of code and effort than it saved.

The one oddity is that MPOL_PREFERRED_MANY cannot be referenced here yet,
because it is part of the UAPI that has not been exposed. Rather than make
that define global now, the check is simply updated in the UAPI patch.

[ feng: add the __GFP_NOWARN flag, and skip direct reclaim to speed up
  allocation in some cases ]

Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
Reported-by: kernel test robot
---
 mm/hugetlb.c   | 26 +++++++++++++++++++++++---
 mm/mempolicy.c |  3 ++-
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8fb42c6..9dfbfa3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1105,7 +1105,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 				unsigned long address, int avoid_reserve,
 				long chg)
 {
-	struct page *page;
+	struct page *page = NULL;
 	struct mempolicy *mpol;
 	gfp_t gfp_mask;
 	nodemask_t *nodemask;
@@ -1126,7 +1126,17 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 
 	gfp_mask = htlb_alloc_mask(h);
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
-	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+	if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
+		gfp_t gfp_mask1 = gfp_mask | __GFP_NOWARN;
+
+		gfp_mask1 &= ~__GFP_DIRECT_RECLAIM;
+		page = dequeue_huge_page_nodemask(h,
+				gfp_mask1, nid, nodemask);
+		if (!page)
+			page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);
+	} else {
+		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+	}
 	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
 		SetHPageRestoreReserve(page);
 		h->resv_huge_pages--;
@@ -1883,7 +1893,17 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 	nodemask_t *nodemask;
 
 	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
-	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
+	if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
+		gfp_t gfp_mask1 = gfp_mask | __GFP_NOWARN;
+
+		gfp_mask1 &= ~__GFP_DIRECT_RECLAIM;
+		page = alloc_surplus_huge_page(h,
+				gfp_mask1, nid, nodemask);
+		if (!page)
+			page = alloc_surplus_huge_page(h, gfp_mask, nid, NULL);
+	} else {
+		page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
+	}
 	mpol_cond_put(mpol);
 
 	return page;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8fe76a7..40d32cb 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2085,7 +2085,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
 					huge_page_shift(hstate_vma(vma)));
 	} else {
 		nid = policy_node(gfp_flags, *mpol, numa_node_id());
-		if ((*mpol)->mode == MPOL_BIND)
+		if ((*mpol)->mode == MPOL_BIND ||
+		    (*mpol)->mode == MPOL_PREFERRED_MANY)
 			*nodemask = &(*mpol)->nodes;
 	}
 	return nid;
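
For readers following the series, the sketch below (not part of the patch) restates
the two-pass pattern that both hugetlb call sites above follow for the
MPOL_PREFERRED_MANY case: first try only the preferred nodes with __GFP_NOWARN set
and direct reclaim disabled, then retry across all nodes with the original gfp
flags. The helper name and the alloc_fn callback parameter are invented for
illustration only; the actual patch open-codes the pattern at each call site.

	/*
	 * Illustration only, not part of this patch: the common two-pass
	 * fallback used above for MPOL_PREFERRED_MANY. The helper name and
	 * the alloc_fn callback are hypothetical.
	 */
	static struct page *
	hugetlb_alloc_preferred_many(struct hstate *h, gfp_t gfp_mask, int nid,
				     nodemask_t *nodemask,
				     struct page *(*alloc_fn)(struct hstate *, gfp_t,
							      int, nodemask_t *))
	{
		/* First pass: preferred nodes only, no direct reclaim, no warnings. */
		gfp_t gfp_first = (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM;
		struct page *page = alloc_fn(h, gfp_first, nid, nodemask);

		if (page)
			return page;

		/* Second pass: fall back to all nodes with the original gfp flags. */
		return alloc_fn(h, gfp_mask, nid, NULL);
	}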