From patchwork Tue Jan 23 23:10:12 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13528262
Date: Tue, 23 Jan 2024 15:10:12 -0800
Message-ID: <20240123231014.3801041-1-surenb@google.com>
Subject: [PATCH v2 1/3] mm: make vm_area_struct anon_name field RCU-safe
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
    dchinner@redhat.com, casey@schaufler-ca.com, ben.wolsieffer@hefring.com,
    paulmck@kernel.org, david@redhat.com, avagin@google.com,
    usama.anjum@collabora.com, peterx@redhat.com, hughd@google.com,
    ryan.roberts@arm.com, wangkefeng.wang@huawei.com, Liam.Howlett@Oracle.com,
    yuzhao@google.com, axelrasmussen@google.com, lstoakes@gmail.com,
    talumbau@google.com, willy@infradead.org, vbabka@suse.cz,
    mgorman@techsingularity.net, jhubbard@nvidia.com, vishal.moola@gmail.com,
    mathieu.desnoyers@efficios.com, dhowells@redhat.com, jgg@ziepe.ca,
    sidhartha.kumar@oracle.com, andriy.shevchenko@linux.intel.com,
    yangxingui@huawei.com, keescook@chromium.org, sj@kernel.org,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, kernel-team@android.com, surenb@google.com

For lockless /proc/pid/maps reading we have to ensure all the fields used
when generating the output are RCU-safe. The only pointer fields in
vm_area_struct which are used to generate that file's output are vm_file
and anon_name. vm_file is RCU-safe but anon_name is not. Make anon_name
RCU-safe as well.
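As an illustration (not part of the patch), a lockless reader would use the
new helper roughly as follows. example_report_vma_name() is a hypothetical
caller; it assumes the vma pointer itself is kept valid by the caller (e.g.
via RCU-safe vma traversal) and only sketches the take-reference/use/put
pattern:

static void example_report_vma_name(struct vm_area_struct *vma)
{
	struct anon_vma_name *anon_name;

	rcu_read_lock();
	/* Takes a reference on success; returns NULL if unnamed or being freed */
	anon_name = anon_vma_name_get_rcu(vma);
	rcu_read_unlock();

	if (!anon_name)
		return;

	/* The elevated refcount keeps the string stable after rcu_read_unlock() */
	pr_info("vma name: %s\n", anon_name->name);
	anon_vma_name_put(anon_name);
}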
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm_inline.h | 10 +++++++++-
 include/linux/mm_types.h  |  3 ++-
 mm/madvise.c              | 30 ++++++++++++++++++++++++++----
 3 files changed, 37 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f4fe593c1400..bbdb0ca857f1 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -389,7 +389,7 @@ static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
 	struct anon_vma_name *anon_name = anon_vma_name(orig_vma);
 
 	if (anon_name)
-		new_vma->anon_name = anon_vma_name_reuse(anon_name);
+		rcu_assign_pointer(new_vma->anon_name, anon_vma_name_reuse(anon_name));
 }
 
 static inline void free_anon_vma_name(struct vm_area_struct *vma)
@@ -411,6 +411,8 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
 		!strcmp(anon_name1->name, anon_name2->name);
 }
 
+struct anon_vma_name *anon_vma_name_get_rcu(struct vm_area_struct *vma);
+
 #else /* CONFIG_ANON_VMA_NAME */
 static inline void anon_vma_name_get(struct anon_vma_name *anon_name) {}
 static inline void anon_vma_name_put(struct anon_vma_name *anon_name) {}
@@ -424,6 +426,12 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
 	return true;
 }
 
+static inline
+struct anon_vma_name *anon_vma_name_get_rcu(struct vm_area_struct *vma)
+{
+	return NULL;
+}
+
 #endif /* CONFIG_ANON_VMA_NAME */
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8b611e13153e..bbe1223cd992 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -545,6 +545,7 @@ struct vm_userfaultfd_ctx {};
 
 struct anon_vma_name {
 	struct kref kref;
+	struct rcu_head rcu;
 	/* The name needs to be at the end because it is dynamically sized. */
 	char name[];
 };
@@ -699,7 +700,7 @@ struct vm_area_struct {
 	 * terminated string containing the name given to the vma, or NULL if
 	 * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
 	 */
-	struct anon_vma_name *anon_name;
+	struct anon_vma_name __rcu *anon_name;
 #endif
 #ifdef CONFIG_SWAP
 	atomic_long_t swap_readahead_info;
diff --git a/mm/madvise.c b/mm/madvise.c
index 912155a94ed5..0f222d464254 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -88,14 +88,15 @@ void anon_vma_name_free(struct kref *kref)
 {
 	struct anon_vma_name *anon_name =
 			container_of(kref, struct anon_vma_name, kref);
-	kfree(anon_name);
+	kfree_rcu(anon_name, rcu);
 }
 
 struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
 
-	return vma->anon_name;
+	return rcu_dereference_protected(vma->anon_name,
+		rwsem_is_locked(&vma->vm_mm->mmap_lock));
 }
 
 /* mmap_lock should be write-locked */
@@ -105,7 +106,7 @@ static int replace_anon_vma_name(struct vm_area_struct *vma,
 	struct anon_vma_name *orig_name = anon_vma_name(vma);
 
 	if (!anon_name) {
-		vma->anon_name = NULL;
+		rcu_assign_pointer(vma->anon_name, NULL);
 		anon_vma_name_put(orig_name);
 		return 0;
 	}
@@ -113,11 +114,32 @@ static int replace_anon_vma_name(struct vm_area_struct *vma,
 	if (anon_vma_name_eq(orig_name, anon_name))
 		return 0;
 
-	vma->anon_name = anon_vma_name_reuse(anon_name);
+	rcu_assign_pointer(vma->anon_name, anon_vma_name_reuse(anon_name));
 	anon_vma_name_put(orig_name);
 
 	return 0;
 }
+
+/*
+ * Returned anon_vma_name is stable due to elevated refcount but not guaranteed
+ * to be assigned to the original VMA after the call.
+ */
+struct anon_vma_name *anon_vma_name_get_rcu(struct vm_area_struct *vma)
+{
+	struct anon_vma_name __rcu *anon_name;
+
+	WARN_ON_ONCE(!rcu_read_lock_held());
+
+	anon_name = rcu_dereference(vma->anon_name);
+	if (!anon_name)
+		return NULL;
+
+	if (unlikely(!kref_get_unless_zero(&anon_name->kref)))
+		return NULL;
+
+	return anon_name;
+}
+
 #else /* CONFIG_ANON_VMA_NAME */
 static int replace_anon_vma_name(struct vm_area_struct *vma,
 		struct anon_vma_name *anon_name)

From patchwork Tue Jan 23 23:10:13 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13528263
Date: Tue, 23 Jan 2024 15:10:13 -0800
In-Reply-To: <20240123231014.3801041-1-surenb@google.com>
References: <20240123231014.3801041-1-surenb@google.com>
Message-ID: <20240123231014.3801041-2-surenb@google.com>
Subject: [PATCH v2 2/3] mm: add mm_struct sequence number to detect write locks
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
    dchinner@redhat.com, casey@schaufler-ca.com, ben.wolsieffer@hefring.com,
    paulmck@kernel.org, david@redhat.com, avagin@google.com,
    usama.anjum@collabora.com, peterx@redhat.com, hughd@google.com,
    ryan.roberts@arm.com, wangkefeng.wang@huawei.com, Liam.Howlett@Oracle.com,
    yuzhao@google.com, axelrasmussen@google.com, lstoakes@gmail.com,
    talumbau@google.com, willy@infradead.org, vbabka@suse.cz,
    mgorman@techsingularity.net, jhubbard@nvidia.com, vishal.moola@gmail.com,
    mathieu.desnoyers@efficios.com, dhowells@redhat.com, jgg@ziepe.ca,
    sidhartha.kumar@oracle.com, andriy.shevchenko@linux.intel.com,
    yangxingui@huawei.com, keescook@chromium.org, sj@kernel.org,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, kernel-team@android.com, surenb@google.com

Provide a way for lockless mm_struct users to detect whether the mm might
have been changed since some specific point in time. The API lets the user
record a counter value when it starts using the mm and later use that value
to check whether anyone has write-locked mmap_lock since the value was
recorded. Recording the counter value should be done while holding mmap_lock
at least for reading, to protect the counter from concurrent changes. Every
time mmap_lock is write-locked, mm_struct increments its mm_wr_seq counter,
so a check against a value recorded before that point fails, indicating that
the mm may have been modified.
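As a usage illustration (not part of the patch), the intended record-then-check
pattern might look like the hypothetical helper below: the sequence count is
snapshotted while mmap_lock is held for reading, the lock is dropped, and a
later comparison reveals whether a writer could have modified the address
space in between:

static bool example_mm_unchanged_since_record(struct mm_struct *mm)
{
	unsigned long seq;

	mmap_read_lock(mm);
	mmap_write_seq_record(mm, &seq);	/* snapshot while read-locked */
	mmap_read_unlock(mm);

	/* ... lockless use of mm state happens here ... */

	/* false means somebody may have write-locked and modified the mm */
	return mmap_write_seq_read(mm) == seq;
}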
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm_types.h  |  2 ++
 include/linux/mmap_lock.h | 22 ++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index bbe1223cd992..e749f7f09314 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -846,6 +846,8 @@ struct mm_struct {
 		 */
 		int mm_lock_seq;
 #endif
+		/* Counter incremented each time mm gets write-locked */
+		unsigned long mm_wr_seq;
 
 		unsigned long hiwater_rss; /* High-watermark of RSS usage */
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 8d38dcb6d044..0197079cb6fe 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -106,6 +106,8 @@ static inline void mmap_write_lock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_start_locking(mm, true);
 	down_write(&mm->mmap_lock);
+	/* Pairs with ACQUIRE semantics in mmap_write_seq_read */
+	smp_store_release(&mm->mm_wr_seq, mm->mm_wr_seq + 1);
 	__mmap_lock_trace_acquire_returned(mm, true, true);
 }
 
@@ -113,6 +115,8 @@ static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
 {
 	__mmap_lock_trace_start_locking(mm, true);
 	down_write_nested(&mm->mmap_lock, subclass);
+	/* Pairs with ACQUIRE semantics in mmap_write_seq_read */
+	smp_store_release(&mm->mm_wr_seq, mm->mm_wr_seq + 1);
 	__mmap_lock_trace_acquire_returned(mm, true, true);
 }
 
@@ -122,6 +126,10 @@ static inline int mmap_write_lock_killable(struct mm_struct *mm)
 
 	__mmap_lock_trace_start_locking(mm, true);
 	ret = down_write_killable(&mm->mmap_lock);
+	if (!ret) {
+		/* Pairs with ACQUIRE semantics in mmap_write_seq_read */
+		smp_store_release(&mm->mm_wr_seq, mm->mm_wr_seq + 1);
+	}
 	__mmap_lock_trace_acquire_returned(mm, true, ret == 0);
 	return ret;
 }
@@ -140,6 +148,20 @@ static inline void mmap_write_downgrade(struct mm_struct *mm)
 	downgrade_write(&mm->mmap_lock);
 }
 
+static inline unsigned long mmap_write_seq_read(struct mm_struct *mm)
+{
+	/* Pairs with RELEASE semantics in mmap_write_lock */
+	return smp_load_acquire(&mm->mm_wr_seq);
+}
+
+static inline void mmap_write_seq_record(struct mm_struct *mm,
+					 unsigned long *mm_wr_seq)
+{
+	mmap_assert_locked(mm);
+	/* Nobody can concurrently modify since we hold the mmap_lock */
+	*mm_wr_seq = mm->mm_wr_seq;
+}
+
 static inline void mmap_read_lock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_start_locking(mm, false);

From patchwork Tue Jan 23 23:10:14 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13528264
Date: Tue, 23 Jan 2024 15:10:14 -0800
In-Reply-To: <20240123231014.3801041-1-surenb@google.com>
References: <20240123231014.3801041-1-surenb@google.com>
Message-ID: <20240123231014.3801041-3-surenb@google.com>
Subject: [PATCH v2 3/3] mm/maps: read proc/pid/maps under RCU
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
    dchinner@redhat.com, casey@schaufler-ca.com, ben.wolsieffer@hefring.com,
    paulmck@kernel.org, david@redhat.com, avagin@google.com,
    usama.anjum@collabora.com, peterx@redhat.com, hughd@google.com,
    ryan.roberts@arm.com, wangkefeng.wang@huawei.com, Liam.Howlett@Oracle.com,
    yuzhao@google.com, axelrasmussen@google.com, lstoakes@gmail.com,
    talumbau@google.com, willy@infradead.org, vbabka@suse.cz,
    mgorman@techsingularity.net, jhubbard@nvidia.com, vishal.moola@gmail.com,
    mathieu.desnoyers@efficios.com, dhowells@redhat.com, jgg@ziepe.ca,
    sidhartha.kumar@oracle.com, andriy.shevchenko@linux.intel.com,
    yangxingui@huawei.com, keescook@chromium.org, sj@kernel.org,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, kernel-team@android.com, surenb@google.com

With maple_tree supporting vma tree traversal under RCU and per-vma locks
making vma access RCU-safe, /proc/pid/maps can be read under RCU and without
the need to read-lock mmap_lock. However, vma content can change from under
us, so we make a copy of the vma and pin the pointer fields used when
generating the output (currently only vm_file and anon_name). Afterwards we
check for concurrent address space modifications, wait for them to end, and
retry. That last check is needed to avoid the possibility of missing a vma
during concurrent maple_tree node replacement, which might otherwise report
a NULL when a vma is replaced with another one. While we take the mmap_lock
for reading during such contention, we do so only momentarily, to record a
new mm_wr_seq counter value.

This change is designed to reduce mmap_lock contention and to prevent a
process reading /proc/pid/maps files (often a low-priority task, such as
monitoring/data collection services) from blocking address space updates.

Note that this change has a userspace-visible disadvantage: it allows for
sub-page data tearing, as opposed to the previous mechanism where data
tearing could happen only between pages of generated output data. Since
current userspace considers data tearing between pages to be acceptable, we
assume it will be able to handle sub-page data tearing as well.

Signed-off-by: Suren Baghdasaryan
---
Changes since v1 [1]:
- Fixed CONFIG_ANON_VMA_NAME=n build by introducing
  anon_vma_name_{get|put}_if_valid, per SeongJae Park
- Fixed misspelling of get_vma_snapshot()

[1] https://lore.kernel.org/all/20240122071324.2099712-3-surenb@google.com/

 fs/proc/internal.h        |   2 +
 fs/proc/task_mmu.c        | 113 +++++++++++++++++++++++++++++++++++---
 include/linux/mm_inline.h |  18 ++++++
 3 files changed, 126 insertions(+), 7 deletions(-)

diff --git a/fs/proc/internal.h b/fs/proc/internal.h
index a71ac5379584..e0247225bb68 100644
--- a/fs/proc/internal.h
+++ b/fs/proc/internal.h
@@ -290,6 +290,8 @@ struct proc_maps_private {
 	struct task_struct *task;
 	struct mm_struct *mm;
 	struct vma_iterator iter;
+	unsigned long mm_wr_seq;
+	struct vm_area_struct vma_copy;
 #ifdef CONFIG_NUMA
 	struct mempolicy *task_mempolicy;
 #endif
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3f78ebbb795f..0d5a515156ee 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -126,11 +126,95 @@ static void release_task_mempolicy(struct proc_maps_private *priv)
 }
 #endif
 
-static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv,
-						loff_t *ppos)
+#ifdef CONFIG_PER_VMA_LOCK
+
+static const struct seq_operations proc_pid_maps_op;
+
+/*
+ * Take VMA snapshot and pin vm_file and anon_name as they are used by
+ * show_map_vma.
+ */
+static int get_vma_snapshot(struct proc_maps_private *priv, struct vm_area_struct *vma)
+{
+	struct vm_area_struct *copy = &priv->vma_copy;
+	int ret = -EAGAIN;
+
+	memcpy(copy, vma, sizeof(*vma));
+	if (copy->vm_file && !get_file_rcu(&copy->vm_file))
+		goto out;
+
+	if (!anon_vma_name_get_if_valid(copy))
+		goto put_file;
+
+	if (priv->mm_wr_seq == mmap_write_seq_read(priv->mm))
+		return 0;
+
+	/* Address space got modified, vma might be stale. Wait and retry. */
+	rcu_read_unlock();
+	ret = mmap_read_lock_killable(priv->mm);
+	mmap_write_seq_record(priv->mm, &priv->mm_wr_seq);
+	mmap_read_unlock(priv->mm);
+	rcu_read_lock();
+
+	if (!ret)
+		ret = -EAGAIN; /* no other errors, ok to retry */
+
+	anon_vma_name_put_if_valid(copy);
+put_file:
+	if (copy->vm_file)
+		fput(copy->vm_file);
+out:
+	return ret;
+}
+
+static void put_vma_snapshot(struct proc_maps_private *priv)
+{
+	struct vm_area_struct *vma = &priv->vma_copy;
+
+	anon_vma_name_put_if_valid(vma);
+	if (vma->vm_file)
+		fput(vma->vm_file);
+}
+
+static inline bool needs_mmap_lock(struct seq_file *m)
+{
+	/*
+	 * smaps and numa_maps perform page table walk, therefore require
+	 * mmap_lock but maps can be read under RCU.
+	 */
+	return m->op != &proc_pid_maps_op;
+}
+
+#else /* CONFIG_PER_VMA_LOCK */
+
+/* Without per-vma locks VMA access is not RCU-safe */
+static inline bool needs_mmap_lock(struct seq_file *m) { return true; }
+
+#endif /* CONFIG_PER_VMA_LOCK */
+
+static struct vm_area_struct *proc_get_vma(struct seq_file *m, loff_t *ppos)
 {
+	struct proc_maps_private *priv = m->private;
 	struct vm_area_struct *vma = vma_next(&priv->iter);
 
+#ifdef CONFIG_PER_VMA_LOCK
+	if (vma && !needs_mmap_lock(m)) {
+		int ret;
+
+		put_vma_snapshot(priv);
+		while ((ret = get_vma_snapshot(priv, vma)) == -EAGAIN) {
+			/* lookup the vma at the last position again */
+			vma_iter_init(&priv->iter, priv->mm, *ppos);
+			vma = vma_next(&priv->iter);
+		}
+
+		if (ret) {
+			put_vma_snapshot(priv);
+			return NULL;
+		}
+
+		vma = &priv->vma_copy;
+	}
+#endif
 	if (vma) {
 		*ppos = vma->vm_start;
 	} else {
@@ -169,12 +253,20 @@ static void *m_start(struct seq_file *m, loff_t *ppos)
 		return ERR_PTR(-EINTR);
 	}
 
+	/* Drop mmap_lock if possible */
+	if (!needs_mmap_lock(m)) {
+		mmap_write_seq_record(priv->mm, &priv->mm_wr_seq);
+		mmap_read_unlock(priv->mm);
+		rcu_read_lock();
+		memset(&priv->vma_copy, 0, sizeof(priv->vma_copy));
+	}
+
 	vma_iter_init(&priv->iter, mm, last_addr);
 	hold_task_mempolicy(priv);
 	if (last_addr == -2UL)
 		return get_gate_vma(mm);
 
-	return proc_get_vma(priv, ppos);
+	return proc_get_vma(m, ppos);
 }
 
 static void *m_next(struct seq_file *m, void *v, loff_t *ppos)
@@ -183,7 +275,7 @@ static void *m_next(struct seq_file *m, void *v, loff_t *ppos)
 		*ppos = -1UL;
 		return NULL;
 	}
-	return proc_get_vma(m->private, ppos);
+	return proc_get_vma(m, ppos);
 }
 
 static void m_stop(struct seq_file *m, void *v)
@@ -195,7 +287,10 @@ static void m_stop(struct seq_file *m, void *v)
 		return;
 
 	release_task_mempolicy(priv);
-	mmap_read_unlock(mm);
+	if (needs_mmap_lock(m))
+		mmap_read_unlock(mm);
+	else
+		rcu_read_unlock();
 	mmput(mm);
 	put_task_struct(priv->task);
 	priv->task = NULL;
@@ -283,8 +378,10 @@ show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
 	start = vma->vm_start;
 	end = vma->vm_end;
 	show_vma_header_prefix(m, start, end, flags, pgoff, dev, ino);
-	if (mm)
-		anon_name = anon_vma_name(vma);
+	if (mm) {
+		anon_name = needs_mmap_lock(m) ? anon_vma_name(vma) :
+				anon_vma_name_get_rcu(vma);
+	}
 
 	/*
 	 * Print the dentry name for named mappings, and a
@@ -338,6 +435,8 @@ show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
 			seq_puts(m, name);
 	}
 	seq_putc(m, '\n');
+	if (anon_name && !needs_mmap_lock(m))
+		anon_vma_name_put(anon_name);
 }
 
 static int show_map(struct seq_file *m, void *v)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index bbdb0ca857f1..a4a644fe005e 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -413,6 +413,21 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
 
 struct anon_vma_name *anon_vma_name_get_rcu(struct vm_area_struct *vma);
 
+/*
+ * Takes a reference if anon_vma is valid and stable (has references).
+ * Fails only if anon_vma is valid but we failed to get a reference.
+ */
+static inline bool anon_vma_name_get_if_valid(struct vm_area_struct *vma)
+{
+	return !vma->anon_name || anon_vma_name_get_rcu(vma);
+}
+
+static inline void anon_vma_name_put_if_valid(struct vm_area_struct *vma)
+{
+	if (vma->anon_name)
+		anon_vma_name_put(vma->anon_name);
+}
+
 #else /* CONFIG_ANON_VMA_NAME */
 static inline void anon_vma_name_get(struct anon_vma_name *anon_name) {}
 static inline void anon_vma_name_put(struct anon_vma_name *anon_name) {}
@@ -432,6 +447,9 @@ struct anon_vma_name *anon_vma_name_get_rcu(struct vm_area_struct *vma)
 	return NULL;
 }
 
+static inline bool anon_vma_name_get_if_valid(struct vm_area_struct *vma) { return true; }
+static inline void anon_vma_name_put_if_valid(struct vm_area_struct *vma) {}
+
 #endif /* CONFIG_ANON_VMA_NAME */
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
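Taken together, the series yields a read path along the lines of the
condensed outline below. This is illustrative only and not part of the
patches: example_read_maps() is hypothetical, assumes CONFIG_PER_VMA_LOCK,
and elides seq_file bookkeeping, the gate vma, and error handling; in
particular, the -EAGAIN retry with vma re-lookup is handled by
proc_get_vma() above, while the sketch simply stops:

static void example_read_maps(struct seq_file *m, struct proc_maps_private *priv)
{
	struct vm_area_struct *vma;

	/* Record the write-sequence count, then drop mmap_lock for good */
	mmap_read_lock(priv->mm);
	mmap_write_seq_record(priv->mm, &priv->mm_wr_seq);
	mmap_read_unlock(priv->mm);

	rcu_read_lock();
	vma_iter_init(&priv->iter, priv->mm, 0);
	while ((vma = vma_next(&priv->iter)) != NULL) {
		int ret = get_vma_snapshot(priv, vma);

		/*
		 * Non-zero means a writer raced with us (or the file/name
		 * could not be pinned); see proc_get_vma() for the retry.
		 */
		if (ret)
			break;

		show_map_vma(m, &priv->vma_copy);
		put_vma_snapshot(priv);
	}
	rcu_read_unlock();
}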