From patchwork Thu Aug 4 13:03:42 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12936357
From: Feng Tang
To: Michal Hocko, Muchun Song, Mike Kravetz, Andrew Morton
Cc: Dave Hansen, Ben Widawsky, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang
Subject: [PATCH] mm/hugetlb: add dedicated func to get 'allowed' nodemask for current process
Date: Thu, 4 Aug 2022 21:03:42 +0800
Message-Id: <20220804130342.63355-1-feng.tang@intel.com>

Muchun Song found that after the MPOL_PREFERRED_MANY policy was introduced
in commit b27abaccf8e8 ("mm/mempolicy: add MPOL_PREFERRED_MANY for multiple
preferred nodes") [1], the semantics of policy_nodemask_current() changed
for this new policy: it now returns the 'preferred' nodes instead of the
'allowed' nodes, which can hurt its only user in hugetlb, allowed_mems_nr().

Michal found that policy_nodemask_current() is used only by hugetlb, and
suggested moving it into the hugetlb code under a more explicit name that
enforces the 'allowed' semantics, for which only the MPOL_BIND policy
matters.

One note on the new policy_mbind_nodemask(): the cross check of MPOL_BIND,
the gfp flags and the cpuset configuration can lead to a case where no node
is available, which is considered a broken configuration, and NULL (meaning
all nodes) is returned.

apply_policy_zone() is made extern so it can be called from hugetlb code,
and its return value is changed to bool.

[1].
https://lore.kernel.org/lkml/20220801084207.39086-1-songmuchun@bytedance.com/t/

Reported-by: Muchun Song
Suggested-by: Michal Hocko
Signed-off-by: Feng Tang
Acked-by: Michal Hocko
---
 include/linux/mempolicy.h | 13 +------------
 mm/hugetlb.c              | 24 ++++++++++++++++++++----
 mm/mempolicy.c            |  2 +-
 3 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 668389b4b53d..d232de7cdc56 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -151,13 +151,6 @@ extern bool mempolicy_in_oom_domain(struct task_struct *tsk,
 					const nodemask_t *mask);
 extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
 
-static inline nodemask_t *policy_nodemask_current(gfp_t gfp)
-{
-	struct mempolicy *mpol = get_task_policy(current);
-
-	return policy_nodemask(gfp, mpol);
-}
-
 extern unsigned int mempolicy_slab_node(void);
 
 extern enum zone_type policy_zone;
@@ -189,6 +182,7 @@ static inline bool mpol_is_preferred_many(struct mempolicy *pol)
 	return (pol->mode == MPOL_PREFERRED_MANY);
 }
 
+extern bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone);
 
 #else
 
@@ -294,11 +288,6 @@ static inline void mpol_put_task_policy(struct task_struct *task)
 {
 }
 
-static inline nodemask_t *policy_nodemask_current(gfp_t gfp)
-{
-	return NULL;
-}
-
 static inline bool mpol_is_preferred_many(struct mempolicy *pol)
 {
 	return false;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a18c071c294e..ad84bb85b6de 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4330,18 +4330,34 @@ static int __init default_hugepagesz_setup(char *s)
 }
 __setup("default_hugepagesz=", default_hugepagesz_setup);
 
+static nodemask_t *policy_mbind_nodemask(gfp_t gfp)
+{
+#ifdef CONFIG_NUMA
+	struct mempolicy *mpol = get_task_policy(current);
+
+	/*
+	 * Only enforce MPOL_BIND policy which overlaps with cpuset policy
+	 * (from policy_nodemask) specifically for hugetlb case
+	 */
+	if (mpol->mode == MPOL_BIND &&
+	    (apply_policy_zone(mpol, gfp_zone(gfp)) &&
+	     cpuset_nodemask_valid_mems_allowed(&mpol->nodes)))
+		return &mpol->nodes;
+#endif
+	return NULL;
+}
+
 static unsigned int allowed_mems_nr(struct hstate *h)
 {
 	int node;
 	unsigned int nr = 0;
-	nodemask_t *mpol_allowed;
+	nodemask_t *mbind_nodemask;
 	unsigned int *array = h->free_huge_pages_node;
 	gfp_t gfp_mask = htlb_alloc_mask(h);
 
-	mpol_allowed = policy_nodemask_current(gfp_mask);
-
+	mbind_nodemask = policy_mbind_nodemask(gfp_mask);
 	for_each_node_mask(node, cpuset_current_mems_allowed) {
-		if (!mpol_allowed || node_isset(node, *mpol_allowed))
+		if (!mbind_nodemask || node_isset(node, *mbind_nodemask))
 			nr += array[node];
 	}
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d39b01fd52fe..9f15bc533601 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1805,7 +1805,7 @@ bool vma_policy_mof(struct vm_area_struct *vma)
 	return pol->flags & MPOL_F_MOF;
 }
 
-static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
+bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
 {
 	enum zone_type dynamic_policy_zone = policy_zone;