From patchwork Mon Sep 25 08:26:55 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13397481
Date: Mon, 25 Sep 2023 01:26:55 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
cc: Andi Kleen, Christoph Lameter, Matthew Wilcox, Mike Kravetz,
    David Hildenbrand, Suren Baghdasaryan, Yang Shi, Sidhartha Kumar,
    Vishal Moola, Kefeng Wang, Greg Kroah-Hartman, Tejun Heo, Mel Gorman,
    Michal Hocko, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 05/12] mempolicy trivia: slightly more consistent naming
In-Reply-To: <2d872cef-7787-a7ca-10e-9d45a64c80b4@google.com>
Message-ID: <1a75d3dd-7fa-7a41-c76b-1232198a9a4a@google.com>
References: <2d872cef-7787-a7ca-10e-9d45a64c80b4@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Before getting down to work, do a little cleanup, mainly of inconsistent
variable naming. I gave up trying to rationalize mpol versus pol versus
policy, and node versus nid, but let's avoid p and nd.
Remove a few superfluous blank lines, but add one; and here prefer
vma->vm_policy to vma_policy(vma) - the latter being appropriate in other
sources, which have to allow for !CONFIG_NUMA.

That intriguing line about KERNEL_DS? should have gone in v2.6.15, when
numa_policy_init() stopped using set_mempolicy(2)'s system call handler.

Signed-off-by: Hugh Dickins
Reviewed-by: Matthew Wilcox (Oracle)
---
 include/linux/mempolicy.h | 11 +++---
 mm/mempolicy.c            | 73 ++++++++++++++++++---------------------
 2 files changed, 38 insertions(+), 46 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index d232de7cdc56..8013d716dc46 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -126,10 +126,9 @@ struct shared_policy {
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
 void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
-int mpol_set_shared_policy(struct shared_policy *info,
-				struct vm_area_struct *vma,
-				struct mempolicy *new);
-void mpol_free_shared_policy(struct shared_policy *p);
+int mpol_set_shared_policy(struct shared_policy *sp,
+		struct vm_area_struct *vma, struct mempolicy *mpol);
+void mpol_free_shared_policy(struct shared_policy *sp);
 
 struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
 					    unsigned long idx);
@@ -193,7 +192,7 @@ static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	return true;
 }
 
-static inline void mpol_put(struct mempolicy *p)
+static inline void mpol_put(struct mempolicy *pol)
 {
 }
 
@@ -212,7 +211,7 @@ static inline void mpol_shared_policy_init(struct shared_policy *sp,
 {
 }
 
-static inline void mpol_free_shared_policy(struct shared_policy *p)
+static inline void mpol_free_shared_policy(struct shared_policy *sp)
 {
 }
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b2573921b78f..121bb490481b 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -25,7 +25,7 @@
  *                to the last. It would be better if bind would truly restrict
  *                the allocation to memory nodes instead
  *
- * preferred       Try a specific node first before normal fallback.
+ * preferred      Try a specific node first before normal fallback.
  *                As a special case NUMA_NO_NODE here means do the allocation
  *                on the local CPU. This is normally identical to default,
  *                but useful to set in a VMA when you have a non default
@@ -52,7 +52,7 @@
  * on systems with highmem kernel lowmem allocation don't get policied.
  * Same with GFP_DMA allocations.
  *
- * For shmfs/tmpfs/hugetlbfs shared memory the policy is shared between
+ * For shmem/tmpfs shared memory the policy is shared between
  * all users and remembered even when nobody has memory mapped.
  */
 
@@ -291,6 +291,7 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
 			return ERR_PTR(-EINVAL);
 	} else if (nodes_empty(*nodes))
 		return ERR_PTR(-EINVAL);
+
 	policy = kmem_cache_alloc(policy_cache, GFP_KERNEL);
 	if (!policy)
 		return ERR_PTR(-ENOMEM);
@@ -303,11 +304,11 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
 }
 
 /* Slow path of a mpol destructor. */
-void __mpol_put(struct mempolicy *p)
+void __mpol_put(struct mempolicy *pol)
 {
-	if (!atomic_dec_and_test(&p->refcnt))
+	if (!atomic_dec_and_test(&pol->refcnt))
 		return;
-	kmem_cache_free(policy_cache, p);
+	kmem_cache_free(policy_cache, pol);
 }
 
 static void mpol_rebind_default(struct mempolicy *pol, const nodemask_t *nodes)
@@ -364,7 +365,6 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
  *
  * Called with task's alloc_lock held.
  */
-
 void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new)
 {
 	mpol_rebind_policy(tsk->mempolicy, new);
@@ -375,7 +375,6 @@ void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new)
  *
  * Call holding a reference to mm. Takes mm->mmap_lock during call.
  */
-
 void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
 {
 	struct vm_area_struct *vma;
@@ -754,7 +753,7 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
  * This must be called with the mmap_lock held for writing.
  */
 static int vma_replace_policy(struct vm_area_struct *vma,
-						struct mempolicy *pol)
+				struct mempolicy *pol)
 {
 	int err;
 	struct mempolicy *old;
@@ -800,7 +799,7 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		vmstart = vma->vm_start;
 	}
 
-	if (mpol_equal(vma_policy(vma), new_pol)) {
+	if (mpol_equal(vma->vm_policy, new_pol)) {
 		*prev = vma;
 		return 0;
 	}
@@ -872,18 +871,18 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags,
  *
  * Called with task's alloc_lock held
  */
-static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
+static void get_policy_nodemask(struct mempolicy *pol, nodemask_t *nodes)
 {
 	nodes_clear(*nodes);
-	if (p == &default_policy)
+	if (pol == &default_policy)
 		return;
 
-	switch (p->mode) {
+	switch (pol->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
 	case MPOL_PREFERRED:
 	case MPOL_PREFERRED_MANY:
-		*nodes = p->nodes;
+		*nodes = pol->nodes;
 		break;
 	case MPOL_LOCAL:
 		/* return empty node mask for local allocation */
@@ -1649,7 +1648,6 @@ static int kernel_migrate_pages(pid_t pid, unsigned long maxnode,
 out_put:
 	put_task_struct(task);
 	goto out;
-
 }
 
 SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned long, maxnode,
@@ -1659,7 +1657,6 @@ SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned long, maxnode,
 	return kernel_migrate_pages(pid, maxnode, old_nodes, new_nodes);
 }
 
-
 /* Retrieve NUMA policy */
 static int kernel_get_mempolicy(int __user *policy,
 				unsigned long __user *nmask,
@@ -1842,10 +1839,10 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
  * policy_node() is always coupled with policy_nodemask(), which
  * secures the nodemask limit for 'bind' and 'prefer-many' policy.
  */
-static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
+static int policy_node(gfp_t gfp, struct mempolicy *policy, int nid)
 {
 	if (policy->mode == MPOL_PREFERRED) {
-		nd = first_node(policy->nodes);
+		nid = first_node(policy->nodes);
 	} else {
 		/*
 		 * __GFP_THISNODE shouldn't even be used with the bind policy
@@ -1860,19 +1857,18 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 	    policy->home_node != NUMA_NO_NODE)
 		return policy->home_node;
 
-	return nd;
+	return nid;
 }
 
 /* Do dynamic interleaving for a process */
-static unsigned interleave_nodes(struct mempolicy *policy)
+static unsigned int interleave_nodes(struct mempolicy *policy)
 {
-	unsigned next;
-	struct task_struct *me = current;
+	unsigned int nid;
 
-	next = next_node_in(me->il_prev, policy->nodes);
-	if (next < MAX_NUMNODES)
-		me->il_prev = next;
-	return next;
+	nid = next_node_in(current->il_prev, policy->nodes);
+	if (nid < MAX_NUMNODES)
+		current->il_prev = nid;
+	return nid;
 }
 
 /*
@@ -2362,7 +2358,7 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
 {
-	struct mempolicy *pol = mpol_dup(vma_policy(src));
+	struct mempolicy *pol = mpol_dup(src->vm_policy);
 
 	if (IS_ERR(pol))
 		return PTR_ERR(pol);
@@ -2784,40 +2780,40 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 	}
 }
 
-int mpol_set_shared_policy(struct shared_policy *info,
-			struct vm_area_struct *vma, struct mempolicy *npol)
+int mpol_set_shared_policy(struct shared_policy *sp,
+			struct vm_area_struct *vma, struct mempolicy *pol)
 {
 	int err;
 	struct sp_node *new = NULL;
 	unsigned long sz = vma_pages(vma);
 
-	if (npol) {
-		new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol);
+	if (pol) {
+		new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, pol);
 		if (!new)
 			return -ENOMEM;
 	}
-	err = shared_policy_replace(info, vma->vm_pgoff, vma->vm_pgoff+sz, new);
+	err = shared_policy_replace(sp, vma->vm_pgoff, vma->vm_pgoff + sz, new);
 	if (err && new)
 		sp_free(new);
 	return err;
 }
 
 /* Free a backing policy store on inode delete. */
-void mpol_free_shared_policy(struct shared_policy *p)
+void mpol_free_shared_policy(struct shared_policy *sp)
 {
 	struct sp_node *n;
 	struct rb_node *next;
 
-	if (!p->root.rb_node)
+	if (!sp->root.rb_node)
 		return;
-	write_lock(&p->lock);
-	next = rb_first(&p->root);
+	write_lock(&sp->lock);
+	next = rb_first(&sp->root);
 	while (next) {
 		n = rb_entry(next, struct sp_node, nd);
 		next = rb_next(&n->nd);
-		sp_delete(p, n);
+		sp_delete(sp, n);
 	}
-	write_unlock(&p->lock);
+	write_unlock(&sp->lock);
 }
 
 #ifdef CONFIG_NUMA_BALANCING
@@ -2867,7 +2863,6 @@ static inline void __init check_numabalancing_enable(void)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
-/* assumes fs == KERNEL_DS */
 void __init numa_policy_init(void)
 {
 	nodemask_t interleave_nodes;
@@ -2930,7 +2925,6 @@ void numa_default_policy(void)
 /*
  * Parse and format mempolicy from/to strings
  */
-
 static const char * const policy_modes[] =
 {
 	[MPOL_DEFAULT]  = "default",
@@ -2941,7 +2935,6 @@ static const char * const policy_modes[] =
 	[MPOL_PREFERRED_MANY]  = "prefer (many)",
 };
 
-
 #ifdef CONFIG_TMPFS
 /**
  * mpol_parse_str - parse string to mempolicy, for tmpfs mpol mount option.