From patchwork Sun Dec 4 16:14:30 2022
X-Patchwork-Submitter: Zhongkun He
X-Patchwork-Id: 13063883
From: Zhongkun He <hezhongkun.hzk@bytedance.com>
To: mhocko@suse.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, wuyun.abel@bytedance.com, Zhongkun He
Subject: [PATCH 1/3] mm: replace mpol_put() with mpol_kill() to initiate the destruction of mpol
Date: Mon, 5 Dec 2022 00:14:30 +0800
Message-Id: <20221204161432.2149375-2-hezhongkun.hzk@bytedance.com>
In-Reply-To: <20221204161432.2149375-1-hezhongkun.hzk@bytedance.com>
References: <20221204161432.2149375-1-hezhongkun.hzk@bytedance.com>

mpol_kill() is used to initiate the destruction of a mempolicy, so it is the call that actually frees the mempolicy. mpol_put() just decrements the reference count.
Suggested-by: Michal Hocko
Signed-off-by: Zhongkun He
---
 fs/hugetlbfs/inode.c |  2 +-
 fs/proc/task_mmu.c   |  3 +--
 kernel/fork.c        |  4 ++--
 mm/hugetlb.c         |  6 +++---
 mm/mempolicy.c       | 42 +++++++++++++++++++++---------------------
 mm/mmap.c            | 10 +++++-----
 mm/shmem.c           | 10 +++++-----
 7 files changed, 38 insertions(+), 39 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index dd54f67e47fd..bad1b07f8653 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -93,7 +93,7 @@ static inline void hugetlb_set_vma_policy(struct vm_area_struct *vma,
 static inline void hugetlb_drop_vma_policy(struct vm_area_struct *vma)
 {
-	mpol_cond_put(vma->vm_policy);
+	mpol_put(vma->vm_policy);
 }
 #else
 static inline void hugetlb_set_vma_policy(struct vm_area_struct *vma,
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 8a74cdcc9af0..24aac42428b3 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -107,7 +107,6 @@ static void hold_task_mempolicy(struct proc_maps_private *priv)
 	task_lock(task);
 	priv->task_mempolicy = get_task_policy(task);
-	mpol_get(priv->task_mempolicy);
 	task_unlock(task);
 }
 static void release_task_mempolicy(struct proc_maps_private *priv)
@@ -1949,7 +1948,7 @@ static int show_numa_map(struct seq_file *m, void *v)
 	pol = __get_vma_policy(vma, vma->vm_start);
 	if (pol) {
 		mpol_to_str(buffer, sizeof(buffer), pol);
-		mpol_cond_put(pol);
+		mpol_put(pol);
 	} else {
 		mpol_to_str(buffer, sizeof(buffer), proc_priv->task_mempolicy);
 	}
diff --git a/kernel/fork.c b/kernel/fork.c
index 08969f5aa38d..97ba127a1b89 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -712,7 +712,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 fail_nomem_mas_store:
 	unlink_anon_vmas(tmp);
 fail_nomem_anon_vma_fork:
-	mpol_put(vma_policy(tmp));
+	mpol_kill(vma_policy(tmp));
 fail_nomem_policy:
 	vm_area_free(tmp);
 fail_nomem:
@@ -2537,7 +2537,7 @@ static __latent_entropy struct task_struct *copy_process(
 bad_fork_cleanup_policy:
 	lockdep_free_task(p);
 #ifdef CONFIG_NUMA
-	mpol_put(p->mempolicy);
+	mpol_kill(p->mempolicy);
 #endif
 bad_fork_cleanup_delayacct:
 	delayacct_tsk_free(p);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 546df97c31e4..277330f40818 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1246,7 +1246,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 		h->resv_huge_pages--;
 	}
-	mpol_cond_put(mpol);
+	mpol_put(mpol);
 	return page;
 err:
@@ -2315,7 +2315,7 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 	if (!page)
 		page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
-	mpol_cond_put(mpol);
+	mpol_put(mpol);
 	return page;
 }
@@ -2351,7 +2351,7 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 	gfp_mask = htlb_alloc_mask(h);
 	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
 	page = alloc_huge_page_nodemask(h, node, nodemask, gfp_mask);
-	mpol_cond_put(mpol);
+	mpol_put(mpol);
 	return page;
 }
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ee3e2ed5ef07..f1857ebded46 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -795,11 +795,11 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 	old = vma->vm_policy;
 	vma->vm_policy = new; /* protected by mmap_lock */
-	mpol_put(old);
+	mpol_kill(old);
 	return 0;
 err_out:
-	mpol_put(new);
+	mpol_kill(new);
 	return err;
 }
@@ -890,7 +890,7 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags,
 	ret = mpol_set_nodemask(new, nodes, scratch);
 	if (ret) {
 		task_unlock(current);
-		mpol_put(new);
+		mpol_kill(new);
 		goto out;
 	}
@@ -899,7 +899,7 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags,
 	if (new && new->mode == MPOL_INTERLEAVE)
 		current->il_prev = MAX_NUMNODES-1;
 	task_unlock(current);
-	mpol_put(old);
+	mpol_kill(old);
 	ret = 0;
 out:
 	NODEMASK_SCRATCH_FREE(scratch);
@@ -925,8 +925,7 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 		*nodes = p->nodes;
 		break;
 	case MPOL_LOCAL:
-		/* return empty node mask for local allocation */
-		break;
+		/* return empty node mask for local allocation */killbreak;
 	default:
 		BUG();
 	}
@@ -1370,7 +1369,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 	mmap_write_unlock(mm);
 mpol_out:
-	mpol_put(new);
+	mpol_kill(new);
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
 		lru_cache_enable();
 	return err;
@@ -1566,7 +1565,7 @@ SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, le
 		new->home_node = home_node;
 		err = mbind_range(mm, vmstart, vmend, new);
-		mpol_put(new);
+		mpol_kill(new);
 		if (err)
 			break;
 	}
@@ -1813,14 +1812,13 @@ static struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
 bool vma_policy_mof(struct vm_area_struct *vma)
 {
 	struct mempolicy *pol;
+	bool ret = false;
 
 	if (vma->vm_ops && vma->vm_ops->get_policy) {
-		bool ret = false;
-
 		pol = vma->vm_ops->get_policy(vma, vma->vm_start);
 		if (pol && (pol->flags & MPOL_F_MOF))
 			ret = true;
-		mpol_cond_put(pol);
+		mpol_put(pol);
 		return ret;
 	}
@@ -1828,8 +1826,9 @@ bool vma_policy_mof(struct vm_area_struct *vma)
 	pol = vma->vm_policy;
 	if (!pol)
 		pol = get_task_policy(current);
+	mpol_put(pol);
 
-	return pol->flags & MPOL_F_MOF;
+	return ret;
 }
 
 bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
@@ -2193,7 +2192,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned nid;
 
 		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
-		mpol_cond_put(pol);
+		mpol_put(pol);
 		gfp |= __GFP_COMP;
 		page = alloc_page_interleave(gfp, order, nid);
 		if (page && order > 1)
@@ -2208,7 +2207,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		node = policy_node(gfp, pol, node);
 		gfp |= __GFP_COMP;
 		page = alloc_pages_preferred_many(gfp, order, node, pol);
-		mpol_cond_put(pol);
+		mpol_put(pol);
 		if (page && order > 1)
 			prep_transhuge_page(page);
 		folio = (struct folio *)page;
@@ -2233,7 +2232,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		nmask = policy_nodemask(gfp, pol);
 		if (!nmask || node_isset(hpage_node, *nmask)) {
-			mpol_cond_put(pol);
+			mpol_put(pol);
 			/*
 			 * First, try to allocate THP only on local node, but
 			 * don't reclaim unnecessarily, just compact.
 			 */
@@ -2258,7 +2257,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 	nmask = policy_nodemask(gfp, pol);
 	preferred_nid = policy_node(gfp, pol, node);
 	folio = __folio_alloc(gfp, order, preferred_nid, nmask);
-	mpol_cond_put(pol);
+	mpol_put(pol);
 out:
 	return folio;
 }
@@ -2300,6 +2299,7 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
 				policy_node(gfp, pol, numa_node_id()),
 				policy_nodemask(gfp, pol));
 
+	mpol_put(pol);
 	return page;
 }
 EXPORT_SYMBOL(alloc_pages);
@@ -2566,7 +2566,7 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
 static void sp_free(struct sp_node *n)
 {
-	mpol_put(n->policy);
+	mpol_kill(n->policy);
 	kmem_cache_free(sn_cache, n);
 }
@@ -2655,7 +2655,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 	if (curnid != polnid)
 		ret = polnid;
 out:
-	mpol_cond_put(pol);
+	mpol_put(pol);
 	return ret;
 }
@@ -2674,7 +2674,7 @@ void mpol_put_task_policy(struct task_struct *task)
 	pol = task->mempolicy;
 	task->mempolicy = NULL;
 	task_unlock(task);
-	mpol_put(pol);
+	mpol_kill(pol);
 }
 
 static void sp_delete(struct shared_policy *sp, struct sp_node *n)
@@ -2763,7 +2763,7 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
 err_out:
 	if (mpol_new)
-		mpol_put(mpol_new);
+		mpol_kill(mpol_new);
 	if (n_new)
 		kmem_cache_free(sn_cache, n_new);
@@ -2823,7 +2823,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 		mpol_set_shared_policy(sp, &pvma, new); /* adds ref */
 put_new:
-	mpol_put(new);	/* drop initial ref */
+	mpol_kill(new);	/* drop initial ref */
 free_scratch:
 	NODEMASK_SCRATCH_FREE(scratch);
 put_mpol:
diff --git a/mm/mmap.c b/mm/mmap.c
index 2def55555e05..7bf785463499 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -140,7 +140,7 @@ static void remove_vma(struct vm_area_struct *vma)
 		vma->vm_ops->close(vma);
 	if (vma->vm_file)
 		fput(vma->vm_file);
-	mpol_put(vma_policy(vma));
+	mpol_kill(vma_policy(vma));
 	vm_area_free(vma);
 }
@@ -595,7 +595,7 @@ inline int vma_expand(struct ma_state *mas, struct vm_area_struct *vma,
 		if (next->anon_vma)
 			anon_vma_merge(vma, next);
 		mm->map_count--;
-		mpol_put(vma_policy(next));
+		mpol_kill(vma_policy(next));
 		vm_area_free(next);
 	}
@@ -836,7 +836,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		if (next->anon_vma)
 			anon_vma_merge(vma, next);
 		mm->map_count--;
-		mpol_put(vma_policy(next));
+		mpol_kill(vma_policy(next));
 		if (remove_next != 2)
 			BUG_ON(vma->vm_end < next->vm_end);
 		vm_area_free(next);
@@ -2253,7 +2253,7 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 		fput(new->vm_file);
 	unlink_anon_vmas(new);
 out_free_mpol:
-	mpol_put(vma_policy(new));
+	mpol_kill(vma_policy(new));
 out_free_vma:
 	vm_area_free(new);
 	validate_mm_mt(mm);
@@ -3246,7 +3246,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 	unlink_anon_vmas(new_vma);
 out_free_mempol:
-	mpol_put(vma_policy(new_vma));
+	mpol_kill(vma_policy(new_vma));
 out_free_vma:
 	vm_area_free(new_vma);
 out:
diff --git a/mm/shmem.c b/mm/shmem.c
index c1d8b8a1aa3b..11e57d79c104 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1485,7 +1485,7 @@ static void shmem_pseudo_vma_init(struct vm_area_struct *vma,
 static void shmem_pseudo_vma_destroy(struct vm_area_struct *vma)
 {
 	/* Drop reference taken by mpol_shared_policy_lookup() */
-	mpol_cond_put(vma->vm_policy);
+	mpol_put(vma->vm_policy);
 }
 
 static struct folio *shmem_swapin(swp_entry_t swap, gfp_t gfp,
@@ -3528,7 +3528,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
 		break;
 	case Opt_mpol:
 		if (IS_ENABLED(CONFIG_NUMA)) {
-			mpol_put(ctx->mpol);
+			mpol_kill(ctx->mpol);
 			ctx->mpol = NULL;
 			if (mpol_parse_str(param->string, &ctx->mpol))
 				goto bad_value;
@@ -3666,7 +3666,7 @@ static int shmem_reconfigure(struct fs_context *fc)
 		ctx->mpol = NULL;
 	}
 	raw_spin_unlock(&sbinfo->stat_lock);
-	mpol_put(mpol);
+	mpol_kill(mpol);
 	return 0;
 out:
 	raw_spin_unlock(&sbinfo->stat_lock);
@@ -3730,7 +3730,7 @@ static void shmem_put_super(struct super_block *sb)
 	free_percpu(sbinfo->ino_batch);
 	percpu_counter_destroy(&sbinfo->used_blocks);
-	mpol_put(sbinfo->mpol);
+	mpol_kill(sbinfo->mpol);
 	kfree(sbinfo);
 	sb->s_fs_info = NULL;
 }
@@ -3830,7 +3830,7 @@ static void shmem_free_fc(struct fs_context *fc)
 	struct shmem_options *ctx = fc->fs_private;
 
 	if (ctx) {
-		mpol_put(ctx->mpol);
+		mpol_kill(ctx->mpol);
 		kfree(ctx);
 	}
 }

From patchwork Sun Dec 4 16:14:31 2022
X-Patchwork-Submitter: Zhongkun He
X-Patchwork-Id: 13063884
From: Zhongkun He <hezhongkun.hzk@bytedance.com>
To: mhocko@suse.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, wuyun.abel@bytedance.com, Zhongkun He
Subject: [PATCH 2/3] mm: fix the reference of mempolicy in some functions
Date: Mon, 5 Dec 2022 00:14:31 +0800
Message-Id: <20221204161432.2149375-3-hezhongkun.hzk@bytedance.com>
In-Reply-To: <20221204161432.2149375-1-hezhongkun.hzk@bytedance.com>
References: <20221204161432.2149375-1-hezhongkun.hzk@bytedance.com>

Some functions use a mempolicy in process context but do not take a reference on it. Take the reference in those functions so that the mempolicy has a clear lifetime model.
Suggested-by: Michal Hocko
Signed-off-by: Zhongkun He
---
 mm/hugetlb.c   | 16 ++++++-----
 mm/mempolicy.c | 78 +++++++++++++++++++++++++-------------------------
 2 files changed, 48 insertions(+), 46 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 277330f40818..0c2b5233e0c9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4353,19 +4353,19 @@ static int __init default_hugepagesz_setup(char *s)
 }
 __setup("default_hugepagesz=", default_hugepagesz_setup);
 
-static nodemask_t *policy_mbind_nodemask(gfp_t gfp)
+static nodemask_t *policy_mbind_nodemask(gfp_t gfp, struct mempolicy **mpol)
 {
 #ifdef CONFIG_NUMA
-	struct mempolicy *mpol = get_task_policy(current);
+	*mpol = get_task_policy(current);
 
 	/*
	 * Only enforce MPOL_BIND policy which overlaps with cpuset policy
	 * (from policy_nodemask) specifically for hugetlb case
	 */
-	if (mpol->mode == MPOL_BIND &&
-		(apply_policy_zone(mpol, gfp_zone(gfp)) &&
-		 cpuset_nodemask_valid_mems_allowed(&mpol->nodes)))
-		return &mpol->nodes;
+	if ((*mpol)->mode == MPOL_BIND &&
+		(apply_policy_zone(*mpol, gfp_zone(gfp)) &&
+		 cpuset_nodemask_valid_mems_allowed(&(*mpol)->nodes)))
+		return &(*mpol)->nodes;
 #endif
 	return NULL;
 }
@@ -4375,14 +4375,16 @@ static unsigned int allowed_mems_nr(struct hstate *h)
 	int node;
 	unsigned int nr = 0;
 	nodemask_t *mbind_nodemask;
+	struct mempolicy *mpol = NULL;
 	unsigned int *array = h->free_huge_pages_node;
 	gfp_t gfp_mask = htlb_alloc_mask(h);
 
-	mbind_nodemask = policy_mbind_nodemask(gfp_mask);
+	mbind_nodemask = policy_mbind_nodemask(gfp_mask, &mpol);
 	for_each_node_mask(node, cpuset_current_mems_allowed) {
 		if (!mbind_nodemask || node_isset(node, *mbind_nodemask))
 			nr += array[node];
 	}
+	mpol_put(mpol);
 
 	return nr;
 }
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f1857ebded46..0feffb7ff01e 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -159,7 +159,7 @@ int numa_map_to_online_node(int node)
 EXPORT_SYMBOL_GPL(numa_map_to_online_node);
 
 /* Obtain a reference on the specified task mempolicy */
-static mempolicy *get_task_mpol(struct task_struct *p)
+static struct mempolicy *get_task_mpol(struct task_struct *p)
 {
 	struct mempolicy *pol;
@@ -925,7 +925,8 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 		*nodes = p->nodes;
 		break;
 	case MPOL_LOCAL:
-		/* return empty node mask for local allocation */killbreak;
+		/* return empty node mask for local allocation */
+		break;
 	default:
 		BUG();
 	}
@@ -951,7 +952,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 	int err;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
-	struct mempolicy *pol = current->mempolicy, *pol_refcount = NULL;
+	struct mempolicy *pol;
 
 	if (flags &
	    ~(unsigned long)(MPOL_F_NODE|MPOL_F_ADDR|MPOL_F_MEMS_ALLOWED))
@@ -966,8 +967,10 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 		task_unlock(current);
 		return 0;
 	}
+	pol = get_task_mpol(current);
 
 	if (flags & MPOL_F_ADDR) {
+		mpol_put(pol); /* put the refcount of task mpol */
 		/*
		 * Do NOT fall back to task policy if the
		 * vma/shared policy at addr is NULL. We
@@ -979,27 +982,19 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 			mmap_read_unlock(mm);
 			return -EFAULT;
 		}
-		if (vma->vm_ops && vma->vm_ops->get_policy)
-			pol = vma->vm_ops->get_policy(vma, addr);
-		else
-			pol = vma->vm_policy;
-	} else if (addr)
-		return -EINVAL;
+		/* obtain a reference to vma mpol. */
+		pol = __get_vma_policy(vma, addr);
+		mmap_read_unlock(mm);
+	} else if (addr) {
+		err = -EINVAL;
+		goto out;
+	}
 
 	if (!pol)
 		pol = &default_policy;	/* indicates default behavior */
 
 	if (flags & MPOL_F_NODE) {
 		if (flags & MPOL_F_ADDR) {
-			/*
-			 * Take a refcount on the mpol, because we are about to
-			 * drop the mmap_lock, after which only "pol" remains
-			 * valid, "vma" is stale.
-			 */
-			pol_refcount = pol;
-			vma = NULL;
-			mpol_get(pol);
-			mmap_read_unlock(mm);
 			err = lookup_node(mm, addr);
 			if (err < 0)
 				goto out;
@@ -1023,21 +1018,19 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 	err = 0;
 	if (nmask) {
-		if (mpol_store_user_nodemask(pol)) {
+		/*
+		 * There is no need for a lock, since we get
+		 * a reference to mpol.
+		 */
+		if (mpol_store_user_nodemask(pol))
 			*nmask = pol->w.user_nodemask;
-		} else {
-			task_lock(current);
+		else
 			get_policy_nodemask(pol, nmask);
-			task_unlock(current);
-		}
 	}
 
 out:
-	mpol_cond_put(pol);
-	if (vma)
-		mmap_read_unlock(mm);
-	if (pol_refcount)
-		mpol_put(pol_refcount);
+	if (pol != &default_policy)
+		mpol_put(pol);
 	return err;
 }
@@ -1923,16 +1916,18 @@ unsigned int mempolicy_slab_node(void)
 	if (!in_task())
 		return node;
 
-	policy = current->mempolicy;
+	policy = get_task_mpol(current);
 	if (!policy)
 		return node;
 
 	switch (policy->mode) {
 	case MPOL_PREFERRED:
-		return first_node(policy->nodes);
+		node = first_node(policy->nodes);
+		break;
 
 	case MPOL_INTERLEAVE:
-		return interleave_nodes(policy);
+		node = interleave_nodes(policy);
+		break;
 
 	case MPOL_BIND:
 	case MPOL_PREFERRED_MANY:
@@ -1948,14 +1943,17 @@ unsigned int mempolicy_slab_node(void)
 		zonelist = &NODE_DATA(node)->node_zonelists[ZONELIST_FALLBACK];
 		z = first_zones_zonelist(zonelist, highest_zoneidx,
							&policy->nodes);
-		return z->zone ? zone_to_nid(z->zone) : node;
+		node = z->zone ? zone_to_nid(z->zone) : node;
+		break;
 	}
 	case MPOL_LOCAL:
-		return node;
+		break;
 
 	default:
 		BUG();
 	}
+	mpol_put(policy);
+
 	return node;
 }
@@ -2379,21 +2377,23 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 		unsigned long nr_pages, struct page **page_array)
 {
 	struct mempolicy *pol = &default_policy;
+	unsigned long pages;
 
 	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
 		pol = get_task_policy(current);
 
 	if (pol->mode == MPOL_INTERLEAVE)
-		return alloc_pages_bulk_array_interleave(gfp, pol,
+		pages = alloc_pages_bulk_array_interleave(gfp, pol,
							 nr_pages, page_array);
-
-	if (pol->mode == MPOL_PREFERRED_MANY)
-		return alloc_pages_bulk_array_preferred_many(gfp,
+	else if (pol->mode == MPOL_PREFERRED_MANY)
+		pages = alloc_pages_bulk_array_preferred_many(gfp,
				numa_node_id(), pol, nr_pages, page_array);
-
-	return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
+	else
+		pages = __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
				  policy_nodemask(gfp, pol), nr_pages, NULL,
				  page_array);
+	mpol_put(pol);
+	return pages;
 }
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)

From patchwork Sun Dec 4 16:14:32 2022
X-Patchwork-Submitter: Zhongkun He
X-Patchwork-Id: 13063885
From: Zhongkun He
To: mhocko@suse.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, wuyun.abel@bytedance.com, Zhongkun He
Subject: [PATCH 3/3] mm: add __rcu symbol for task->mempolicy
Date: Mon, 5 Dec 2022 00:14:32 +0800
Message-Id: <20221204161432.2149375-4-hezhongkun.hzk@bytedance.com>
In-Reply-To: <20221204161432.2149375-1-hezhongkun.hzk@bytedance.com>
References: <20221204161432.2149375-1-hezhongkun.hzk@bytedance.com>

task->mempolicy is protected by task_lock in the slow path, but for
performance the hot path takes no lock and no reference. That makes it
difficult for other processes to safely adjust a task's policy. For these
reasons, add the __rcu annotation to the task mempolicy. There is no need
to add RCU protection to the vma mempolicy, which is protected by
mmap_lock.

Suggested-by: Michal Hocko
Signed-off-by: Zhongkun He
---
 include/linux/sched.h | 2 +-
 mm/mempolicy.c        | 9 ++++-----
 mm/slab.c             | 5 +++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ffb6eb55cd13..c8a297ca61ab 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1252,7 +1252,7 @@ struct task_struct {
 #endif
 #ifdef CONFIG_NUMA
 	/* Protected by alloc_lock: */
-	struct mempolicy	*mempolicy;
+	struct mempolicy __rcu	*mempolicy;
 	short			il_prev;
 	short			pref_node_fork;
 #endif
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0feffb7ff01e..837083fff9c8 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -894,8 +894,7 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags,
 		goto out;
 	}
 
-	old = current->mempolicy;
-	current->mempolicy = new;
+	old = rcu_replace_pointer(current->mempolicy, new, true);
 	if (new && new->mode == MPOL_INTERLEAVE)
 		current->il_prev = MAX_NUMNODES-1;
 	task_unlock(current);
@@ -999,7 +998,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 		if (err < 0)
 			goto out;
 		*policy = err;
-	} else if (pol == current->mempolicy &&
+	} else if (pol == rcu_access_pointer(current->mempolicy) &&
 		   pol->mode == MPOL_INTERLEAVE) {
 		*policy = next_node_in(current->il_prev, pol->nodes);
 	} else {
@@ -2065,7 +2064,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 {
 	struct mempolicy *mempolicy;
 
-	if (!(mask && current->mempolicy))
+	if (!(mask && rcu_access_pointer(current->mempolicy)))
 		return false;
 
 	task_lock(current);
@@ -2426,7 +2425,7 @@ struct mempolicy *__mpol_dup(struct mempolicy *old)
 		return ERR_PTR(-ENOMEM);
 
 	/* task's mempolicy is protected by alloc_lock */
-	if (old == current->mempolicy) {
+	if (old == rcu_access_pointer(current->mempolicy)) {
 		task_lock(current);
 		*new = *old;
 		task_unlock(current);
diff --git a/mm/slab.c b/mm/slab.c
index 59c8e28f7b6a..f205869d6c36 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3052,7 +3052,7 @@ static void *alternate_node_alloc(struct kmem_cache *cachep, gfp_t flags)
 	nid_alloc = nid_here = numa_mem_id();
 	if (cpuset_do_slab_mem_spread() && (cachep->flags & SLAB_MEM_SPREAD))
 		nid_alloc = cpuset_slab_spread_node();
-	else if (current->mempolicy)
+	else if (rcu_access_pointer(current->mempolicy))
 		nid_alloc = mempolicy_slab_node();
 	if (nid_alloc != nid_here)
 		return ____cache_alloc_node(cachep, flags, nid_alloc);
@@ -3188,7 +3188,8 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 	int slab_node = numa_mem_id();
 
 	if (nodeid == NUMA_NO_NODE) {
-		if (current->mempolicy || cpuset_do_slab_mem_spread()) {
+		if (rcu_access_pointer(current->mempolicy) ||
+		    cpuset_do_slab_mem_spread()) {
 			objp = alternate_node_alloc(cachep, flags);
 			if (objp)
 				goto out;