From patchwork Fri Apr 18 17:49:58 2025
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 14057561
Date: Fri, 18 Apr 2025 10:49:58 -0700
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, david@redhat.com,
 vbabka@suse.cz, peterx@redhat.com, jannh@google.com, hannes@cmpxchg.org,
 mhocko@kernel.org, paulmck@kernel.org, shuah@kernel.org, adobriyan@gmail.com,
 brauner@kernel.org, josef@toxicpanda.com, yebin10@huawei.com,
 linux@weissschuh.net, willy@infradead.org, osalvador@suse.de,
 andrii@kernel.org, ryan.roberts@arm.com, christophe.leroy@csgroup.eu,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org, surenb@google.com
Subject: [PATCH v3 7/8] mm/maps: read proc/pid/maps under RCU
Message-ID: <20250418174959.1431962-8-surenb@google.com>
In-Reply-To: <20250418174959.1431962-1-surenb@google.com>
References: <20250418174959.1431962-1-surenb@google.com>
X-Mailer: git-send-email 2.49.0.805.g082f7c87e0-goog
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
With maple_tree supporting vma tree traversal under RCU and vma and its
important members being RCU-safe, /proc/pid/maps can be read under RCU
and without the need to read-lock mmap_lock. However, vma content can
change from under us, therefore we make a copy of the vma and we pin
the pointer fields used when generating the output (currently only
vm_file and anon_name). Afterwards we check for concurrent address
space modifications, wait for them to end and retry. While we take the
mmap_lock for reading during such contention, we do that only
momentarily to record the new mm_wr_seq counter.

This change is designed to reduce mmap_lock contention and prevent a
process reading /proc/pid/maps files (often a low priority task, such
as monitoring/data collection services) from blocking address space
updates.

Note that this change has a userspace-visible disadvantage: it allows
for sub-page data tearing as opposed to the previous mechanism where
data tearing could happen only between pages of generated output data.
Since current userspace considers data tearing between pages to be
acceptable, we assume it will be able to handle sub-page data tearing
as well.
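In outline, the lockless read path added below works as follows (a
simplified sketch; the wrapper name maps_read_vma() is illustrative
only, the real logic lives in get_vma_snapshot()/get_stable_vma(), and
the pinning of vm_file/anon_name and the !CONFIG_PER_VMA_LOCK fallback
are reduced to comments):

static struct vm_area_struct *maps_read_vma(struct proc_maps_private *priv,
					    struct vm_area_struct *vma,
					    loff_t last_pos)
{
	while (vma) {
		/* Snapshot the vma; vm_file and anon_name also get pinned here. */
		memcpy(&priv->vma_copy, vma, sizeof(*vma));

		/* No address space writer since mm_wr_seq was recorded? Use the copy. */
		if (!mmap_lock_speculate_retry(priv->mm, priv->mm_wr_seq))
			return &priv->vma_copy;

		/*
		 * A writer intervened: take mmap_lock only long enough to
		 * record a new mm_wr_seq, then retry from the last position.
		 */
		rcu_read_unlock();
		if (mmap_read_lock_killable(priv->mm)) {
			rcu_read_lock();
			return ERR_PTR(-EINTR);
		}
		mmap_lock_speculate_try_begin(priv->mm, &priv->mm_wr_seq);
		mmap_read_unlock(priv->mm);
		rcu_read_lock();

		vma_iter_init(&priv->iter, priv->mm, last_pos);
		vma = vma_next(&priv->iter);
	}
	return NULL;
}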
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 fs/proc/internal.h        |   6 ++
 fs/proc/task_mmu.c        | 170 ++++++++++++++++++++++++++++++++++----
 include/linux/mm_inline.h |  18 ++++
 3 files changed, 177 insertions(+), 17 deletions(-)

diff --git a/fs/proc/internal.h b/fs/proc/internal.h
index 96122e91c645..6e1169c1f4df 100644
--- a/fs/proc/internal.h
+++ b/fs/proc/internal.h
@@ -379,6 +379,12 @@ struct proc_maps_private {
 	struct task_struct *task;
 	struct mm_struct *mm;
 	struct vma_iterator iter;
+	bool mmap_locked;
+	loff_t last_pos;
+#ifdef CONFIG_PER_VMA_LOCK
+	unsigned int mm_wr_seq;
+	struct vm_area_struct vma_copy;
+#endif
 #ifdef CONFIG_NUMA
 	struct mempolicy *task_mempolicy;
 #endif
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index b9e4fbbdf6e6..f9d50a61167c 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -127,13 +127,130 @@ static void release_task_mempolicy(struct proc_maps_private *priv)
 }
 #endif
 
-static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv,
-						loff_t *ppos)
+#ifdef CONFIG_PER_VMA_LOCK
+
+static const struct seq_operations proc_pid_maps_op;
+
+/*
+ * Take VMA snapshot and pin vm_file and anon_name as they are used by
+ * show_map_vma.
+ */
+static int get_vma_snapshot(struct proc_maps_private *priv, struct vm_area_struct *vma)
+{
+	struct vm_area_struct *copy = &priv->vma_copy;
+	int ret = -EAGAIN;
+
+	memcpy(copy, vma, sizeof(*vma));
+	if (copy->vm_file && !get_file_rcu(&copy->vm_file))
+		goto out;
+
+	if (!anon_vma_name_get_if_valid(copy))
+		goto put_file;
+
+	if (!mmap_lock_speculate_retry(priv->mm, priv->mm_wr_seq))
+		return 0;
+
+	/* Address space got modified, vma might be stale. Re-lock and retry. */
+	rcu_read_unlock();
+	ret = mmap_read_lock_killable(priv->mm);
+	if (!ret) {
+		/* mmap_lock_speculate_try_begin() succeeds when holding mmap_read_lock */
+		mmap_lock_speculate_try_begin(priv->mm, &priv->mm_wr_seq);
+		mmap_read_unlock(priv->mm);
+		ret = -EAGAIN;
+	}
+
+	rcu_read_lock();
+
+	anon_vma_name_put_if_valid(copy);
+put_file:
+	if (copy->vm_file)
+		fput(copy->vm_file);
+out:
+	return ret;
+}
+
+static void put_vma_snapshot(struct proc_maps_private *priv)
+{
+	struct vm_area_struct *vma = &priv->vma_copy;
+
+	anon_vma_name_put_if_valid(vma);
+	if (vma->vm_file)
+		fput(vma->vm_file);
+}
+
+static inline bool drop_mmap_lock(struct seq_file *m, struct proc_maps_private *priv)
+{
+	/*
+	 * smaps and numa_maps perform page table walk, therefore require
+	 * mmap_lock but maps can be read under RCU.
+	 */
+	if (m->op != &proc_pid_maps_op)
+		return false;
+
+	/* mmap_lock_speculate_try_begin() succeeds when holding mmap_read_lock */
+	mmap_lock_speculate_try_begin(priv->mm, &priv->mm_wr_seq);
+	mmap_read_unlock(priv->mm);
+	rcu_read_lock();
+	memset(&priv->vma_copy, 0, sizeof(priv->vma_copy));
+
+	return true;
+}
+
+static struct vm_area_struct *get_stable_vma(struct vm_area_struct *vma,
+					     struct proc_maps_private *priv,
+					     loff_t last_pos)
+{
+	int ret;
+
+	put_vma_snapshot(priv);
+	while ((ret = get_vma_snapshot(priv, vma)) == -EAGAIN) {
+		/* lookup the vma at the last position again */
+		vma_iter_init(&priv->iter, priv->mm, last_pos);
+		vma = vma_next(&priv->iter);
+	}
+
+	return ret ? ERR_PTR(ret) : &priv->vma_copy;
+}
+
+#else /* CONFIG_PER_VMA_LOCK */
+
+/* Without per-vma locks VMA access is not RCU-safe */
+static inline bool drop_mmap_lock(struct seq_file *m,
+				  struct proc_maps_private *priv)
+{
+	return false;
+}
+
+static struct vm_area_struct *get_stable_vma(struct vm_area_struct *vma,
+					     struct proc_maps_private *priv,
+					     loff_t last_pos)
+{
+	return vma;
+}
+
+#endif /* CONFIG_PER_VMA_LOCK */
+
+static struct vm_area_struct *proc_get_vma(struct seq_file *m, loff_t *ppos)
 {
+	struct proc_maps_private *priv = m->private;
 	struct vm_area_struct *vma = vma_next(&priv->iter);
 
+	if (vma && !priv->mmap_locked)
+		vma = get_stable_vma(vma, priv, *ppos);
+
+	if (IS_ERR(vma))
+		return vma;
+
 	if (vma) {
-		*ppos = vma->vm_start;
+		/* Store previous position to be able to restart if needed */
+		priv->last_pos = *ppos;
+		/*
+		 * Track the end of the reported vma to ensure position changes
+		 * even if previous vma was merged with the next vma and we
+		 * found the extended vma with the same vm_start.
+		 */
+		*ppos = vma->vm_end;
 	} else {
 		*ppos = -2UL;
 		vma = get_gate_vma(priv->mm);
@@ -148,6 +265,7 @@ static void *m_start(struct seq_file *m, loff_t *ppos)
 	unsigned long last_addr = *ppos;
 	struct mm_struct *mm;
 
+	priv->mmap_locked = true;
 	/* See m_next(). Zero at the start or after lseek. */
 	if (last_addr == -1UL)
 		return NULL;
@@ -170,12 +288,18 @@ static void *m_start(struct seq_file *m, loff_t *ppos)
 		return ERR_PTR(-EINTR);
 	}
 
+	/* Drop mmap_lock if possible */
+	if (drop_mmap_lock(m, priv))
+		priv->mmap_locked = false;
+
+	if (last_addr > 0)
+		*ppos = last_addr = priv->last_pos;
 	vma_iter_init(&priv->iter, mm, last_addr);
 	hold_task_mempolicy(priv);
 	if (last_addr == -2UL)
 		return get_gate_vma(mm);
 
-	return proc_get_vma(priv, ppos);
+	return proc_get_vma(m, ppos);
 }
 
 static void *m_next(struct seq_file *m, void *v, loff_t *ppos)
@@ -184,7 +308,7 @@ static void *m_next(struct seq_file *m, void *v, loff_t *ppos)
 		*ppos = -1UL;
 		return NULL;
 	}
-	return proc_get_vma(m->private, ppos);
+	return proc_get_vma(m, ppos);
 }
 
 static void m_stop(struct seq_file *m, void *v)
@@ -196,7 +320,10 @@ static void m_stop(struct seq_file *m, void *v)
 		return;
 
 	release_task_mempolicy(priv);
-	mmap_read_unlock(mm);
+	if (priv->mmap_locked)
+		mmap_read_unlock(mm);
+	else
+		rcu_read_unlock();
 	mmput(mm);
 	put_task_struct(priv->task);
 	priv->task = NULL;
@@ -243,14 +370,20 @@ static int do_maps_open(struct inode *inode, struct file *file,
 static void get_vma_name(struct vm_area_struct *vma,
 			 const struct path **path,
 			 const char **name,
-			 const char **name_fmt)
+			 const char **name_fmt, bool mmap_locked)
 {
-	struct anon_vma_name *anon_name = vma->vm_mm ? anon_vma_name(vma) : NULL;
+	struct anon_vma_name *anon_name;
 
 	*name = NULL;
 	*path = NULL;
 	*name_fmt = NULL;
 
+	if (vma->vm_mm)
+		anon_name = mmap_locked ? anon_vma_name(vma) :
+				anon_vma_name_get_rcu(vma);
+	else
+		anon_name = NULL;
+
 	/*
 	 * Print the dentry name for named mappings, and a
 	 * special [heap] marker for the heap:
@@ -266,39 +399,41 @@ static void get_vma_name(struct vm_area_struct *vma,
 		} else {
 			*path = file_user_path(vma->vm_file);
 		}
-		return;
+		goto out;
 	}
 
 	if (vma->vm_ops && vma->vm_ops->name) {
 		*name = vma->vm_ops->name(vma);
 		if (*name)
-			return;
+			goto out;
 	}
 
 	*name = arch_vma_name(vma);
 	if (*name)
-		return;
+		goto out;
 
 	if (!vma->vm_mm) {
 		*name = "[vdso]";
-		return;
+		goto out;
 	}
 
 	if (vma_is_initial_heap(vma)) {
 		*name = "[heap]";
-		return;
+		goto out;
 	}
 
 	if (vma_is_initial_stack(vma)) {
 		*name = "[stack]";
-		return;
+		goto out;
 	}
 
 	if (anon_name) {
 		*name_fmt = "[anon:%s]";
 		*name = anon_name->name;
-		return;
 	}
+out:
+	if (anon_name && !mmap_locked)
+		anon_vma_name_put(anon_name);
 }
 
 static void show_vma_header_prefix(struct seq_file *m,
@@ -324,6 +459,7 @@ static void show_vma_header_prefix(struct seq_file *m,
 static void
 show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
 {
+	struct proc_maps_private *priv = m->private;
 	const struct path *path;
 	const char *name_fmt, *name;
 	vm_flags_t flags = vma->vm_flags;
@@ -344,7 +480,7 @@ show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
 	end = vma->vm_end;
 	show_vma_header_prefix(m, start, end, flags, pgoff, dev, ino);
 
-	get_vma_name(vma, &path, &name, &name_fmt);
+	get_vma_name(vma, &path, &name, &name_fmt, priv->mmap_locked);
 	if (path) {
 		seq_pad(m, ' ');
 		seq_path(m, path, "\n");
@@ -549,7 +685,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
 		const char *name_fmt;
 		size_t name_sz = 0;
 
-		get_vma_name(vma, &path, &name, &name_fmt);
+		get_vma_name(vma, &path, &name, &name_fmt, true);
 		if (path || name_fmt || name) {
 			name_buf = kmalloc(name_buf_sz, GFP_KERNEL);
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 9ac2d92d7ede..436512f1e759 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -434,6 +434,21 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
 
 struct anon_vma_name *anon_vma_name_get_rcu(struct vm_area_struct *vma);
 
+/*
+ * Takes a reference if anon_vma is valid and stable (has references).
+ * Fails only if anon_vma is valid but we failed to get a reference.
+ */
+static inline bool anon_vma_name_get_if_valid(struct vm_area_struct *vma)
+{
+	return !vma->anon_name || anon_vma_name_get_rcu(vma);
+}
+
+static inline void anon_vma_name_put_if_valid(struct vm_area_struct *vma)
+{
+	if (vma->anon_name)
+		anon_vma_name_put(vma->anon_name);
+}
+
 #else /* CONFIG_ANON_VMA_NAME */
 static inline void anon_vma_name_get(struct anon_vma_name *anon_name) {}
 static inline void anon_vma_name_put(struct anon_vma_name *anon_name) {}
@@ -453,6 +468,9 @@ struct anon_vma_name *anon_vma_name_get_rcu(struct vm_area_struct *vma)
 	return NULL;
 }
 
+static inline bool anon_vma_name_get_if_valid(struct vm_area_struct *vma) { return true; }
+static inline void anon_vma_name_put_if_valid(struct vm_area_struct *vma) {}
+
 #endif /* CONFIG_ANON_VMA_NAME */
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
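For context, the readers this change targets are ordinary sequential
consumers of /proc/pid/maps along the lines of the illustrative
userspace snippet below (not part of the patch); as noted above, such
readers already tolerate tearing between read buffers and are assumed
to cope with sub-page tearing the same way:

#include <stdio.h>

/*
 * Stream a target's mappings; each emitted line reflects a vma snapshot
 * that may have been taken while the target was concurrently modifying
 * its address space.
 */
static int dump_maps(int pid)
{
	char path[64], line[1024];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/maps", pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}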