From patchwork Thu Jan 9 02:30:10 2025
Date: Wed, 8 Jan 2025 18:30:10 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-2-surenb@google.com>
Subject: [PATCH v8 01/16] mm: introduce vma_start_read_locked{_nested} helpers
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com,
    richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com, "Liam R. Howlett"
Introduce helper functions which can be used to read-lock a VMA when
holding mmap_lock for read. Replace direct accesses to vma->vm_lock with
these new helpers.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Davidlohr Bueso
Reviewed-by: Shakeel Butt
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h | 24 ++++++++++++++++++++++++
 mm/userfaultfd.c   | 22 +++++-----------------
 2 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 57b9e4dc4724..b040376ee81f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -735,6 +735,30 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	return true;
 }
 
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read_nested(&vma->vm_lock->lock, subclass);
+}
+
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked(struct vm_area_struct *vma)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read(&vma->vm_lock->lock);
+}
+
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 11b7eb3c8a28..a03c6f1ceb9e 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -84,16 +84,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
-	if (!IS_ERR(vma)) {
-		/*
-		 * We cannot use vma_start_read() as it may fail due to
-		 * false locked (see comment in vma_start_read()). We
-		 * can avoid that by directly locking vm_lock under
-		 * mmap_lock, which guarantees that nobody can lock the
-		 * vma for write (vma_start_write()) under us.
-		 */
-		down_read(&vma->vm_lock->lock);
-	}
+	if (!IS_ERR(vma))
+		vma_start_read_locked(vma);
 
 	mmap_read_unlock(mm);
 	return vma;
@@ -1490,14 +1482,10 @@ static int uffd_move_lock(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		/*
-		 * See comment in uffd_lock_vma() as to why not using
-		 * vma_start_read() here.
-		 */
-		down_read(&(*dst_vmap)->vm_lock->lock);
+		vma_start_read_locked(*dst_vmap);
 		if (*dst_vmap != *src_vmap)
-			down_read_nested(&(*src_vmap)->vm_lock->lock,
-					 SINGLE_DEPTH_NESTING);
+			vma_start_read_locked_nested(*src_vmap,
+						SINGLE_DEPTH_NESTING);
 	}
 	mmap_read_unlock(mm);
 	return err;

From patchwork Thu Jan 9 02:30:11 2025
Date: Wed, 8 Jan 2025 18:30:11 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-3-surenb@google.com>
Subject: [PATCH v8 02/16] mm: move per-vma lock into vm_area_struct
From: Suren Baghdasaryan
To:
 akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com,
    richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com, "Liam R. Howlett"
Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of the performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].
Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefits, there are no reasons
for this split. Merging the vm_lock back into vm_area_struct also allows
vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
Move vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned as well.
With kernel compiled using defconfig, this causes VMA memory consumption
to grow from 160 (vm_area_struct) + 40 (vm_lock) bytes to 256 bytes:

    slabinfo before:
     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vma_lock         ...     40  102    1 : ...
     vm_area_struct   ...    160   51    2 : ...

    slabinfo after moving vm_lock:
     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vm_area_struct   ...    256   32    2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64 pages,
which is 5.5MB per 100000 VMAs. Note that the size of this structure is
dependent on the kernel configuration and typically the original size is
higher than 160 bytes. Therefore these calculations are close to the
worst case scenario. A more realistic vm_area_struct usage before this
change is:

     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vma_lock         ...     40  102    1 : ...
     vm_area_struct   ...    176   46    2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 54 to 64 pages,
which is 3.9MB per 100000 VMAs. This memory consumption growth can be
addressed later by optimizing the vm_lock.

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h               | 28 ++++++++++--------
 include/linux/mm_types.h         |  6 ++--
 kernel/fork.c                    | 49 ++++----------------------------
 tools/testing/vma/vma_internal.h | 33 +++++----------------
 4 files changed, 32 insertions(+), 84 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b040376ee81f..920e5ddd77cc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -697,6 +697,12 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_lock_init(struct vm_area_struct *vma)
+{
+	init_rwsem(&vma->vm_lock.lock);
+	vma->vm_lock_seq = UINT_MAX;
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -714,7 +720,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
+	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
 		return false;
 
 	/*
@@ -729,7 +735,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
 	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock->lock);
+		up_read(&vma->vm_lock.lock);
 		return false;
 	}
 	return true;
@@ -744,7 +750,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock->lock, subclass);
+	down_read_nested(&vma->vm_lock.lock, subclass);
 }
 
 /*
@@ -756,13 +762,13 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
 static inline void vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock->lock);
+	down_read(&vma->vm_lock.lock);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock->lock);
+	up_read(&vma->vm_lock.lock);
 	rcu_read_unlock();
 }
 
@@ -791,7 +797,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock->lock);
+	down_write(&vma->vm_lock.lock);
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -799,7 +805,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock->lock);
+	up_write(&vma->vm_lock.lock);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -811,7 +817,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock->lock))
+	if (!rwsem_is_locked(&vma->vm_lock.lock))
 		vma_assert_write_locked(vma);
 }
 
@@ -844,6 +850,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 
 #else /* CONFIG_PER_VMA_LOCK */
 
+static inline void vma_lock_init(struct vm_area_struct *vma) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -878,10 +885,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
 
-/*
- * WARNING: vma_init does not initialize vma->vm_lock.
- * Use vm_area_alloc()/vm_area_free() if vma needs locking.
- */
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
 	memset(vma, 0, sizeof(*vma));
@@ -890,6 +893,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
 	vma_numab_state_init(vma);
+	vma_lock_init(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 70dce20cbfd1..0ca63dee1902 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -738,8 +738,6 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	/* Unstable RCU readers are allowed to read this. */
-	struct vma_lock *vm_lock;
 #endif
 
 	/*
@@ -792,6 +790,10 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Unstable RCU readers are allowed to read this. */
+	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+#endif
 } __randomize_layout;
 
 #ifdef CONFIG_NUMA
diff --git a/kernel/fork.c b/kernel/fork.c
index ded49f18cd95..40a8e615499f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
-#ifdef CONFIG_PER_VMA_LOCK
-
-/* SLAB cache for vm_area_struct.lock */
-static struct kmem_cache *vma_lock_cachep;
-
-static bool vma_lock_alloc(struct vm_area_struct *vma)
-{
-	vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
-}
-
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	kmem_cache_free(vma_lock_cachep, vma->vm_lock);
-}
-
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
-static inline void vma_lock_free(struct vm_area_struct *vma) {}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
@@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		kmem_cache_free(vm_area_cachep, vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -496,10 +463,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * will be reinitialized.
 	 */
 	data_race(memcpy(new, orig, sizeof(*new)));
-	if (!vma_lock_alloc(new)) {
-		kmem_cache_free(vm_area_cachep, new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
@@ -511,7 +475,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 {
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
-	vma_lock_free(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
@@ -522,7 +485,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 						  vm_rcu);
 
 	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -3188,11 +3151,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-
-	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
-#ifdef CONFIG_PER_VMA_LOCK
-	vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
-#endif
+	vm_area_cachep = KMEM_CACHE(vm_area_struct,
+			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 2404347fa2c7..96aeb28c81f9 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -274,10 +274,10 @@ struct vm_area_struct {
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 * - mmap_lock (in write mode)
-	 * - vm_lock->lock (in write mode)
+	 * - vm_lock.lock (in write mode)
 	 * Can be read reliably while holding one of:
 	 * - mmap_lock (in read or write mode)
-	 * - vm_lock->lock (in read or write mode)
+	 * - vm_lock.lock (in read or write mode)
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -286,7 +286,7 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	struct vma_lock *vm_lock;
+	struct vma_lock vm_lock;
 #endif
 
 	/*
@@ -463,17 +463,10 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 	return mas_find(&vmi->mas, ULONG_MAX);
 }
 
-static inline bool vma_lock_alloc(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma)
 {
-	vma->vm_lock = calloc(1, sizeof(struct vma_lock));
-
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
+	init_rwsem(&vma->vm_lock.lock);
 	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *);
@@ -496,6 +489,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
+	vma_lock_init(vma);
 }
 
 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -506,10 +500,6 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		free(vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -522,10 +512,7 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		return NULL;
 
 	memcpy(new, orig, sizeof(*new));
-	if (!vma_lock_alloc(new)) {
-		free(new);
-		return NULL;
-	}
+	vma_lock_init(new);
 
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 
 	return new;
@@ -695,14 +682,8 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	free(vma->vm_lock);
-}
-
 static inline void __vm_area_free(struct vm_area_struct *vma)
 {
-	vma_lock_free(vma);
 	free(vma);
 }

From patchwork Thu Jan 9 02:30:12 2025
Date: Wed, 8 Jan 2025 18:30:12 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-4-surenb@google.com>
Subject: [PATCH v8 03/16] mm: mark vma as detached until it's added into vma tree
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com,
 richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com,
 surenb@google.com, "Liam R. Howlett"

The current implementation does not set the detached flag when a VMA is first
allocated. This does not represent the real state of the VMA, which is
detached until it is added into the mm's VMA tree. Fix this by marking new
VMAs as detached and resetting the detached flag only after the VMA is added
into a tree.

Introduce vma_mark_attached() to make the API more readable and to simplify
possible future cleanup when vma->vm_mm might be used to indicate a detached
vma, at which point vma_mark_attached() will need an additional mm parameter.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Shakeel Butt
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h               | 27 ++++++++++++++++++++-------
 kernel/fork.c                    |  4 ++++
 mm/memory.c                      |  2 +-
 mm/vma.c                         |  6 +++---
 mm/vma.h                         |  2 ++
 tools/testing/vma/vma_internal.h | 17 ++++++++++++-----
 6 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 920e5ddd77cc..a9d8dd5745f7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,12 +821,21 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 	vma_assert_write_locked(vma);
 }
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
 	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
+}
+
+static inline bool is_vma_detached(struct vm_area_struct *vma)
+{
+	return vma->detached;
 }
 static inline void release_fault_lock(struct vm_fault *vmf)
@@ -857,8 +866,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 	{ mmap_assert_write_locked(vma->vm_mm); }
-static inline void vma_mark_detached(struct vm_area_struct *vma,
-				     bool detached) {}
+static inline void vma_mark_attached(struct vm_area_struct *vma) {}
+static inline void vma_mark_detached(struct vm_area_struct *vma) {}
 static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		unsigned long address)
@@ -891,7 +900,10 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
+#endif
 	vma_numab_state_init(vma);
 	vma_lock_init(vma);
 }
@@ -1086,6 +1098,7 @@ static inline int vma_iter_bulk_store(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
+	vma_mark_attached(vma);
 	return 0;
 }

diff --git a/kernel/fork.c b/kernel/fork.c
index 40a8e615499f..f2f9e7b427ad 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -465,6 +465,10 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	data_race(memcpy(new, orig, sizeof(*new)));
 	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
+#endif
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);

diff --git a/mm/memory.c b/mm/memory.c
index 1342d451b1bd..105b99064ce5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6391,7 +6391,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		goto inval;
 	/* Check if the VMA got isolated after we found it */
-	if (vma->detached) {
+	if (is_vma_detached(vma)) {
 		vma_end_read(vma);
 		count_vm_vma_lock_event(VMA_LOCK_MISS);
 		/* The area was replaced with another one */
diff --git a/mm/vma.c b/mm/vma.c
index af1d549b179c..d603494e69d7 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -327,7 +327,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
 	if (vp->remove) {
 again:
-		vma_mark_detached(vp->remove, true);
+		vma_mark_detached(vp->remove);
 		if (vp->file) {
 			uprobe_munmap(vp->remove, vp->remove->vm_start,
 				      vp->remove->vm_end);
@@ -1221,7 +1221,7 @@ static void reattach_vmas(struct ma_state *mas_detach)
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
-		vma_mark_detached(vma, false);
+		vma_mark_attached(vma);
 	__mt_destroy(mas_detach->tree);
 }
@@ -1296,7 +1296,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		if (error)
 			goto munmap_gather_failed;
-		vma_mark_detached(next, true);
+		vma_mark_detached(next);
 		nrpages = vma_pages(next);
 		vms->nr_pages += nrpages;

diff --git a/mm/vma.h b/mm/vma.h
index a2e8710b8c47..2a2668de8d2c 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -157,6 +157,7 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
+	vma_mark_attached(vma);
 	return 0;
 }
@@ -389,6 +390,7 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
 	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
 	mas_store_prealloc(&vmi->mas, vma);
+	vma_mark_attached(vma);
 }
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)

diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 96aeb28c81f9..47c8b03ffbbd 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -469,13 +469,17 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
 	vma->vm_lock_seq = UINT_MAX;
 }
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *);
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
 	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
 }
 extern const struct vm_operations_struct vma_dummy_vm_ops;
@@ -488,7 +492,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
 	vma_lock_init(vma);
 }
@@ -514,6 +519,8 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	memcpy(new, orig, sizeof(*new));
 	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
 	return new;
 }

From patchwork Thu Jan 9 02:30:13 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931811
Date: Wed, 8 Jan 2025 18:30:13 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-5-surenb@google.com>
Subject: [PATCH v8 04/16] mm: introduce vma_iter_store_attached() to use with attached vmas
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com

vma_iter_store() can be used both when adding a new vma and when updating an
existing one.
However, for existing ones we do not need to mark them attached,
as they are already marked that way. Introduce vma_iter_store_attached() to
be used with already attached vmas.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 12 ++++++++++++
 mm/vma.c           |  8 ++++----
 mm/vma.h           | 11 +++++++++--
 3 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a9d8dd5745f7..e0d403c1ff63 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,6 +821,16 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 	vma_assert_write_locked(vma);
 }
+static inline void vma_assert_attached(struct vm_area_struct *vma)
+{
+	VM_BUG_ON_VMA(vma->detached, vma);
+}
+
+static inline void vma_assert_detached(struct vm_area_struct *vma)
+{
+	VM_BUG_ON_VMA(!vma->detached, vma);
+}
+
 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
 	vma->detached = false;
@@ -866,6 +876,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 	{ mmap_assert_write_locked(vma->vm_mm); }
+static inline void vma_assert_attached(struct vm_area_struct *vma) {}
+static inline void vma_assert_detached(struct vm_area_struct *vma) {}
 static inline void vma_mark_attached(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma) {}

diff --git a/mm/vma.c b/mm/vma.c
index d603494e69d7..b9cf552e120c 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -660,14 +660,14 @@ static int commit_merge(struct vma_merge_struct *vmg,
 	vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff);
 	if (expanded)
-		vma_iter_store(vmg->vmi, vmg->vma);
+		vma_iter_store_attached(vmg->vmi, vmg->vma);
 	if (adj_start) {
 		adjust->vm_start += adj_start;
 		adjust->vm_pgoff += PHYS_PFN(adj_start);
 		if (adj_start < 0) {
 			WARN_ON(expanded);
-			vma_iter_store(vmg->vmi, adjust);
+			vma_iter_store_attached(vmg->vmi, adjust);
 		}
 	}
@@ -2845,7 +2845,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 		anon_vma_interval_tree_pre_update_vma(vma);
 		vma->vm_end = address;
 		/* Overwrite old entry in mtree. */
-		vma_iter_store(&vmi, vma);
+		vma_iter_store_attached(&vmi, vma);
 		anon_vma_interval_tree_post_update_vma(vma);
 		perf_event_mmap(vma);
@@ -2925,7 +2925,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 		vma->vm_start = address;
 		vma->vm_pgoff -= grow;
 		/* Overwrite old entry in mtree. */
-		vma_iter_store(&vmi, vma);
+		vma_iter_store_attached(&vmi, vma);
 		anon_vma_interval_tree_post_update_vma(vma);
 		perf_event_mmap(vma);

diff --git a/mm/vma.h b/mm/vma.h
index 2a2668de8d2c..63dd38d5230c 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -365,9 +365,10 @@ static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
 }
 /* Store a VMA with preallocated memory */
-static inline void vma_iter_store(struct vma_iterator *vmi,
-				  struct vm_area_struct *vma)
+static inline void vma_iter_store_attached(struct vma_iterator *vmi,
+					   struct vm_area_struct *vma)
 {
+	vma_assert_attached(vma);
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
 	if (MAS_WARN_ON(&vmi->mas, vmi->mas.status != ma_start &&
@@ -390,7 +391,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
 	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
 	mas_store_prealloc(&vmi->mas, vma);
+}
+
+static inline void vma_iter_store(struct vma_iterator *vmi,
+				  struct vm_area_struct *vma)
+{
 	vma_mark_attached(vma);
+	vma_iter_store_attached(vmi, vma);
 }
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)

From patchwork Thu Jan 9 02:30:14 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931812
Date: Wed, 8 Jan 2025 18:30:14 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-6-surenb@google.com>
Subject: [PATCH v8 05/16] mm: mark vmas detached upon exit
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com X-Stat-Signature: uaehg3fu6qwm6bqdo5kt3f1yujaoiq6c X-Rspam-User: X-Rspamd-Queue-Id: E94961C0009 X-Rspamd-Server: rspam08 X-HE-Tag: 1736389840-796603 X-HE-Meta: U2FsdGVkX19F2VVpuVCgLOGMaOA11RCWwk2HDP0LwN8naVwsnpig/icJhNvELuX6MzytUcylYVlZggd6V+r+rlo2xP1dMbvENq3NOxlEySdD7ZJj9fykr5dAwnNjwo3wGgDg1s2E9g8OgZFqkcN5algNwMCF2rfZGhl4sMTcx9azBFvjs8H5zKXGWjEDl4Hae2sYNGnZ2eUY6SEWf9sTgQdii11JaokNxwCGUSFO2xU3FduyCdM2JQY+/nmk/GgLapqorYAYaT2Y9psJLbmXmCyx9SBgw1fZFUBEXjqO2l2X6+rCvkrhxoGWrTVAbVH8RFNdpH564B6BYXatD8UE1X/KIEfJNFiLxzZ5mTnWaZRH3ChKx/OylPOjx/lTdWJ++/sa4+xRcXOn5VBohcyy1Bn1WLgwTkHxQQqkUaiCUdBK3vCSl06IiF5rKHMZsSXh2RZX8uDQpRcq/5aHWxXPixpvmdDwg75FhxulHF+ZmS4MCj+VScAPjbtkhXs/RfrRdMLh/G7tyDGzfKSpWWiOxTTG4jT9/Tt+UlJdumKHbV7hPs9jdVzFTveHGQcl7HJVyo0wBOVGHhemDFABufz8/ot/OnfwKatyAa+6OEBK7PmBsEd5Z8lSDzD1VKpkDLo4f5DMRmn3BGzZuYzgTlPMn6c59xXyJ1DGC/SyeDEVPYZKhNE9mQobwI5kVUfGcByH5HaLnjq6mLMC6g5EPUTHBdP4tHKxRn+Aj4GIjY2uyHOtcwWEWzgBy2VWInH4jCCVhAjnv3PjGvQdHLlx04nMXYOKoYsTYu85Ts72BALS3FDc351XREeYhxW0qZEQRhKRV7CcDfkF9bf170QZcVtg5+OaDh5+mchLvtS9CJagmbIqwItjQRqB7baaH5ai8DTuXOyzIRvGoI83iAqzcWt/91XK7E7H8cz3lenG4dT99i6DyeYVCOaA6dqrJsugiQ+PLW5UoRxJZTELrK4tSGr RLmXK8nd rlI1k4u92EEyIwo/NlN5s5OeIgrWyRX4XTAtnwBHbKsivae4VOCe0mS8KLETzyzkALWAgMeK1HR3n8e3706n8BTg9vYtFHAIoH246nuG6tj+kQJGMRzJE2DS9oF0wyLf77m1YBSnNa9SXFrw1vOEblvlQvEl2JeHtKuMJgJleAkWzx+JLc4tDYiIMt3kXUwWKH0yqIPcuMGNiirCQZO1u7OTkM1K6qqHI/m8ti8J68wd40eqP+hsTJOmAOq3/MK/nn2Jn5tjJLYIkUFuG3CTH/yWtZRM/cg7R7wa8vfB+MtCnc9/4btP8SsZLip0bsNarSaF397qgTqqBjXA2+pEXwGP7ki+yQln6+zfNb9BdUxfOwSkDRJiFu0iRu1gwzE/tcirbC7dqt26Z64wZxOM6yokJp8dBlGKMDeuL+ahTScJbqUrjFjRBP/F1XA2LH7FYsCMBCr/sUwN6w3yU5+4dB9Hj5/m958KOWO30123SMsp8m85BfumWvudUFe7DjbLidw4+s1CCY+8ZCbIpY6l+9dnuiktO0tbEy0nL2NoKqdwbSb99ybz+onJWUFfmZdzlxEV6G4h79kZZSLT6R8XQyB4DfE4KUH/eFuXs+RwYpri72lw= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: 
When exit_mmap() removes vmas belonging to an exiting task, it does not
mark them as detached since they can't be reached by other tasks and
they will be freed shortly. Once we introduce vma reuse, all vmas will
have to be in a detached state before they are freed, to ensure that a
vma is in a consistent state when it is reused. Add the missing
vma_mark_detached() before freeing the vma.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 mm/vma.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index b9cf552e120c..93ff42ac2002 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -413,10 +413,12 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
 	if (vma->vm_file)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
-	if (unreachable)
+	if (unreachable) {
+		vma_mark_detached(vma);
 		__vm_area_free(vma);
-	else
+	} else {
 		vm_area_free(vma);
+	}
 }
 
 /*

From patchwork Thu Jan 9 02:30:15 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931813
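The invariant the patch above enforces -- every vma must be marked detached before it is freed, so that a reused vma always starts from a known state -- can be sketched in plain userspace C. This is a toy object cache, not the kernel's vma code; all names (`obj`, `obj_alloc`, `obj_free`, the `detached` field) are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* A toy object cache. "detached" plays the role of the vma detached flag. */
struct obj {
	bool detached;
	struct obj *next;	/* freelist link */
};

static struct obj *freelist;

static struct obj *obj_alloc(void)
{
	struct obj *o;

	if (freelist) {
		/* Reuse path: the object must have been left detached. */
		o = freelist;
		freelist = o->next;
		assert(o->detached);
	} else {
		o = malloc(sizeof(*o));
		o->detached = true;	/* fresh objects start detached */
	}
	return o;
}

static void obj_attach(struct obj *o)
{
	o->detached = false;
}

/* Mirrors the patch: mark the object detached before freeing it,
 * even on a path where nobody else can observe it anymore. */
static void obj_free(struct obj *o)
{
	o->detached = true;
	o->next = freelist;
	freelist = o;
}
```

Without the `o->detached = true` in `obj_free()`, the assertion on the reuse path would trip for any object freed while still attached -- the same inconsistency the patch closes for vmas freed on the exit_mmap() path.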
Date: Wed, 8 Jan 2025 18:30:15 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-7-surenb@google.com>
Subject: [PATCH v8 06/16] types: move struct rcuwait into types.h
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com,
    richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com, "Liam R. Howlett"
Move the rcuwait struct definition into types.h so that rcuwait can be
used without including rcuwait.h, which pulls in other headers. Without
this change mm_types.h can't use rcuwait due to the following circular
dependency:

mm_types.h -> rcuwait.h -> signal.h -> mm_types.h

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
Acked-by: Davidlohr Bueso
Acked-by: Liam R. Howlett
---
 include/linux/rcuwait.h | 13 +------------
 include/linux/types.h   | 12 ++++++++++++
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 27343424225c..9ad134a04b41 100644
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -4,18 +4,7 @@
 
 #include <linux/rcupdate.h>
 #include <linux/sched/signal.h>
-
-/*
- * rcuwait provides a way of blocking and waking up a single
- * task in an rcu-safe manner.
- *
- * The only time @task is non-nil is when a user is blocked (or
- * checking if it needs to) on a condition, and reset as soon as we
- * know that the condition has succeeded and are awoken.
- */
-struct rcuwait {
-	struct task_struct __rcu *task;
-};
+#include <linux/types.h>
 
 #define __RCUWAIT_INITIALIZER(name)		\
 	{ .task = NULL, }

diff --git a/include/linux/types.h b/include/linux/types.h
index 2d7b9ae8714c..f1356a9a5730 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -248,5 +248,17 @@ typedef void (*swap_func_t)(void *a, void *b, int size);
 typedef int (*cmp_r_func_t)(const void *a, const void *b, const void *priv);
 typedef int (*cmp_func_t)(const void *a, const void *b);
 
+/*
+ * rcuwait provides a way of blocking and waking up a single
+ * task in an rcu-safe manner.
+ *
+ * The only time @task is non-nil is when a user is blocked (or
+ * checking if it needs to) on a condition, and reset as soon as we
+ * know that the condition has succeeded and are awoken.
+ */
+struct rcuwait {
+	struct task_struct __rcu *task;
+};
+
 #endif /* __ASSEMBLY__ */
 #endif /* _LINUX_TYPES_H */

From patchwork Thu Jan 9 02:30:16 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931814
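The comment block moved above states rcuwait's contract: a single-waiter slot that is non-NULL only while a task is blocked on (or about to check) a condition, and is reset once the condition has succeeded. That protocol can be sketched in userspace C, with a semaphore standing in for waking a task_struct; `wait_slot` and the `mywait_*` names are invented for illustration, not the kernel API:

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct waiter {
	sem_t sem;	/* stands in for waking a task_struct */
};

/* Non-NULL only while someone is blocked (or checking the condition). */
static _Atomic(struct waiter *) wait_slot;
static atomic_bool condition;

static void mywait_event(void)
{
	struct waiter self;

	sem_init(&self.sem, 0, 0);
	/* Publish ourselves before checking, so a waker can't miss us. */
	atomic_store(&wait_slot, &self);
	while (!atomic_load(&condition))
		sem_wait(&self.sem);
	/* Reset as soon as the condition has succeeded. */
	atomic_store(&wait_slot, NULL);
	sem_destroy(&self.sem);
}

static void mywait_wake_up(void)
{
	struct waiter *w = atomic_load(&wait_slot);

	if (w)	/* nobody waiting: nothing to do */
		sem_post(&w->sem);
}

static void *waker(void *arg)
{
	(void)arg;
	atomic_store(&condition, true);	/* make the condition true first... */
	mywait_wake_up();		/* ...then wake the single waiter */
	return NULL;
}
```

The ordering carries the correctness argument: the waiter publishes its slot before testing the condition, and the waker sets the condition before reading the slot, so with sequentially consistent atomics a wakeup cannot be lost (the kernel version gets the same guarantee from RCU plus memory barriers).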
Date: Wed, 8 Jan 2025 18:30:16 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-8-surenb@google.com>
Subject: [PATCH v8 07/16] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com,
    richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com
With the upcoming replacement of vm_lock with vm_refcnt, we need to
handle the possibility of vma_start_read_locked/vma_start_read_locked_nested
failing due to refcount overflow. Prepare for that possibility by
changing these APIs and adjusting their users.

Signed-off-by: Suren Baghdasaryan
Acked-by: Vlastimil Babka
Cc: Lokesh Gidra
---
 include/linux/mm.h | 6 ++++--
 mm/userfaultfd.c   | 18 +++++++++++++-----
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e0d403c1ff63..6e6edfd4f3d9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -747,10 +747,11 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
  * not be used in such cases because it might fail due to mm_lock_seq overflow.
  * This functionality is used to obtain vma read lock and drop the mmap read lock.
  */
-static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
 	down_read_nested(&vma->vm_lock.lock, subclass);
+	return true;
 }
 
 /*
@@ -759,10 +760,11 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
  * not be used in such cases because it might fail due to mm_lock_seq overflow.
  * This functionality is used to obtain vma read lock and drop the mmap read lock.
  */
-static inline void vma_start_read_locked(struct vm_area_struct *vma)
+static inline bool vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
 	down_read(&vma->vm_lock.lock);
+	return true;
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index a03c6f1ceb9e..eb2ca37b32ee 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -85,7 +85,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
 	if (!IS_ERR(vma))
-		vma_start_read_locked(vma);
+		if (!vma_start_read_locked(vma))
+			vma = ERR_PTR(-EAGAIN);
 
 	mmap_read_unlock(mm);
 	return vma;
@@ -1482,10 +1483,17 @@ static int uffd_move_lock(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		vma_start_read_locked(*dst_vmap);
-		if (*dst_vmap != *src_vmap)
-			vma_start_read_locked_nested(*src_vmap,
-				SINGLE_DEPTH_NESTING);
+		if (vma_start_read_locked(*dst_vmap)) {
+			if (*dst_vmap != *src_vmap) {
+				if (!vma_start_read_locked_nested(*src_vmap,
+							SINGLE_DEPTH_NESTING)) {
+					vma_end_read(*dst_vmap);
+					err = -EAGAIN;
+				}
+			}
+		} else {
+			err = -EAGAIN;
+		}
 	}
 	mmap_read_unlock(mm);
 	return err;

From patchwork Thu Jan 9 02:30:17 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931815
Date: Wed, 8 Jan 2025 18:30:17 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-9-surenb@google.com>
Subject: [PATCH v8 08/16] mm: move mmap_init_lock() out of the header file
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com,
    richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com
XuS253SbJ9rZG5MLZJbeSoDvpJSpsQARk85k5rwxIZzeEayrfZti4R7mZoCsKbiT17KCvYbqI6PxNn3mOBH4KyFxH9Ft0Cj8GxCNJIBkdg5eUC05Rc0tzy5vdkE1svon/9DJ6VGKYdhJJwhJu8+Wmu3TFu81T70o0e8B48KEndJh97Yayy9Rd2ZxeL+r0p6Zk/3u8APoJ4avcE1WgKnpmRo5rpWaesOlUDR2C88a6ua1oitqzf/gSo38RSxMnv+CQ+4UHFoFQ65x77z3sDcJckM6PMEVHYWAd0tw9uMt6jQ1i8/R3X6tKwCS6aTayySHd1JWHsaWenDbpfO9WU5IIL5ZwhCiMwJrZAWNTHNO9rv0iNf/4rs358hjESB26ZqtrnXDh6y4p5UCkslvPuQwhglJajjC4g5PRg1XYnlMr6J8pzGlujhptZ0EP575dpoMZGWvUpxCiVL0GTFI7eHxhXxQp5FZA6iy3i2S+vY90AxLNhH8zLJG7HnQh2Z3frUtU5CzpM9P7jAVYqr4TTxdlH6bSU5cCNVkmXNg7Ae6egMvD6BrPpYNnhlLbf5kfvHjFYaHP0jcmdaQpkmENfibHZVTZldqfOPT/+OHuygk9SvUDI2rpaBfcDxWUbj4ctHbwF0U8lpKl/fC64VFY7FdJP1QG4phUWbEr3h6g X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: mmap_init_lock() is used only from mm_init() in fork.c, therefore it does not have to reside in the header file. This move lets us avoid including additional headers in mmap_lock.h later, when mmap_init_lock() needs to initialize rcuwait object. 
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mmap_lock.h | 6 ------
 kernel/fork.c             | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 45a21faa3ff6..4706c6769902 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -122,12 +122,6 @@ static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int
 
 #endif /* CONFIG_PER_VMA_LOCK */
 
-static inline void mmap_init_lock(struct mm_struct *mm)
-{
-	init_rwsem(&mm->mmap_lock);
-	mm_lock_seqcount_init(mm);
-}
-
 static inline void mmap_write_lock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_start_locking(mm, true);
diff --git a/kernel/fork.c b/kernel/fork.c
index f2f9e7b427ad..d4c75428ccaf 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1219,6 +1219,12 @@ static void mm_init_uprobes_state(struct mm_struct *mm)
 #endif
 }
 
+static inline void mmap_init_lock(struct mm_struct *mm)
+{
+	init_rwsem(&mm->mmap_lock);
+	mm_lock_seqcount_init(mm);
+}
+
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 				 struct user_namespace *user_ns)
 {

From patchwork Thu Jan 9 02:30:18 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931816
Date: Wed, 8 Jan 2025 18:30:18 -0800
Message-ID: <20250109023025.2242447-10-surenb@google.com>
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
Subject: [PATCH v8 09/16] mm: uninline the main body of vma_start_write()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com,
    richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com
vma_start_write() is used in many places and will grow in size very soon.
It is not used in performance-critical paths and uninlining it should
limit future code size growth. No functional changes.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 12 +++---------
 mm/memory.c        | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6e6edfd4f3d9..bc8067de41c5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -787,6 +787,8 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+
 /*
  * Begin writing to a VMA.
  * Exclude concurrent readers under the per-VMA lock until the currently
@@ -799,15 +801,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock.lock);
-	/*
-	 * We should use WRITE_ONCE() here because we can have concurrent reads
-	 * from the early lockless pessimistic check in vma_start_read().
-	 * We don't really care about the correctness of that early check, but
-	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
-	 */
-	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+	__vma_start_write(vma, mm_lock_seq);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index 105b99064ce5..26569a44fb5c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6370,6 +6370,20 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+{
+	down_write(&vma->vm_lock.lock);
+	/*
+	 * We should use WRITE_ONCE() here because we can have concurrent reads
+	 * from the early lockless pessimistic check in vma_start_read().
+	 * We don't really care about the correctness of that early check, but
+	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
+	 */
+	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
+	up_write(&vma->vm_lock.lock);
+}
+EXPORT_SYMBOL_GPL(__vma_start_write);
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the

From patchwork Thu Jan 9 02:30:19 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931817
Date: Wed, 8 Jan 2025 18:30:19 -0800
Message-ID: <20250109023025.2242447-11-surenb@google.com>
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
Subject: [PATCH v8 10/16] refcount: introduce __refcount_{add|inc}_not_zero_limited
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Introduce functions to increase a refcount but with a top limit above which
they will fail to increase (the limit is inclusive).
Setting the limit to INT_MAX indicates no limit.

Signed-off-by: Suren Baghdasaryan
Acked-by: Vlastimil Babka
---
 include/linux/refcount.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 35f039ecb272..4934247848cf 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -137,13 +137,19 @@ static inline unsigned int refcount_read(const refcount_t *r)
 }
 
 static inline __must_check __signed_wrap
-bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+bool __refcount_add_not_zero_limited(int i, refcount_t *r, int *oldp,
+				     int limit)
 {
 	int old = refcount_read(r);
 
 	do {
 		if (!old)
 			break;
+		if (i > limit - old) {
+			if (oldp)
+				*oldp = old;
+			return false;
+		}
 	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
 
 	if (oldp)
@@ -155,6 +161,12 @@ bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
 	return old;
 }
 
+static inline __must_check __signed_wrap
+bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+{
+	return __refcount_add_not_zero_limited(i, r, oldp, INT_MAX);
+}
+
 /**
  * refcount_add_not_zero - add a value to a refcount unless it is 0
  * @i: the value to add to the refcount
@@ -213,6 +225,12 @@ static inline void refcount_add(int i, refcount_t *r)
 	__refcount_add(i, r, NULL);
 }
 
+static inline __must_check bool __refcount_inc_not_zero_limited(refcount_t *r,
+								int *oldp, int limit)
+{
+	return __refcount_add_not_zero_limited(1, r, oldp, limit);
+}
+
 static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp)
 {
 	return __refcount_add_not_zero(1, r, oldp);
From patchwork Thu Jan 9 02:30:20 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931818
Date: Wed, 8 Jan 2025 18:30:20 -0800
Message-ID: <20250109023025.2242447-12-surenb@google.com>
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
Subject: [PATCH v8 11/16] mm: replace vm_lock and detached flag with a reference count
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: rw_semaphore is a sizable structure of 40 bytes and consumes considerable space for each vm_area_struct. However vma_lock has two important specifics which can be used to replace rw_semaphore with a simpler structure: 1. Readers never wait. They try to take the vma_lock and fall back to mmap_lock if that fails. 2. Only one writer at a time will ever try to write-lock a vma_lock because writers first take mmap_lock in write mode. Because of these requirements, full rw_semaphore functionality is not needed and we can replace rw_semaphore and the vma->detached flag with a refcount (vm_refcnt). When vma is in detached state, vm_refcnt is 0 and only a call to vma_mark_attached() can take it out of this state. Note that unlike before, now we enforce both vma_mark_attached() and vma_mark_detached() to be done only after vma has been write-locked. vma_mark_attached() changes vm_refcnt to 1 to indicate that it has been attached to the vma tree. When a reader takes read lock, it increments vm_refcnt, unless the top usable bit of vm_refcnt (0x40000000) is set, indicating presence of a writer. When writer takes write lock, it sets the top usable bit to indicate its presence. If there are readers, writer will wait using newly introduced mm->vma_writer_wait. Since all writers take mmap_lock in write mode first, there can be only one writer at a time. The last reader to release the lock will signal the writer to wake up. refcount might overflow if there are many competing readers, in which case read-locking will fail. Readers are expected to handle such failures. In summary: 1. all readers increment the vm_refcnt; 2. writer sets top usable (writer) bit of vm_refcnt; 3. readers cannot increment the vm_refcnt if the writer bit is set; 4. 
in the presence of readers, writer must wait for the vm_refcnt to
   drop to 1 (ignoring the writer bit), indicating an attached vma
   with no readers;
5. vm_refcnt overflow is handled by the readers.

Suggested-by: Peter Zijlstra
Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm.h               | 98 ++++++++++++++++++++++----------
 include/linux/mm_types.h         | 22 ++++---
 kernel/fork.c                    | 13 ++---
 mm/init-mm.c                     |  1 +
 mm/memory.c                      | 77 +++++++++++++++++++++----
 tools/testing/vma/linux/atomic.h |  5 ++
 tools/testing/vma/vma_internal.h | 66 +++++++++++----------
 7 files changed, 193 insertions(+), 89 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8067de41c5..ec7c064792ff 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include

 struct mempolicy;
 struct anon_vma;
@@ -697,12 +698,41 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */

 #ifdef CONFIG_PER_VMA_LOCK
-static inline void vma_lock_init(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt)
 {
-        init_rwsem(&vma->vm_lock.lock);
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+        static struct lock_class_key lockdep_key;
+
+        lockdep_init_map(&vma->vmlock_dep_map, "vm_lock", &lockdep_key, 0);
+#endif
+        if (reset_refcnt)
+                refcount_set(&vma->vm_refcnt, 0);
         vma->vm_lock_seq = UINT_MAX;
 }

+static inline bool is_vma_writer_only(int refcnt)
+{
+        /*
+         * With a writer and no readers, refcnt is VMA_LOCK_OFFSET if the vma
+         * is detached and (VMA_LOCK_OFFSET + 1) if it is attached. Waiting on
+         * a detached vma happens only in vma_mark_detached() and is a rare
+         * case, therefore most of the time there will be no unnecessary wakeup.
+         */
+        return refcnt & VMA_LOCK_OFFSET && refcnt <= VMA_LOCK_OFFSET + 1;
+}
+
+static inline void vma_refcount_put(struct vm_area_struct *vma)
+{
+        int oldcnt;
+
+        if (!__refcount_dec_and_test(&vma->vm_refcnt, &oldcnt)) {
+                rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+
+                if (is_vma_writer_only(oldcnt - 1))
+                        rcuwait_wake_up(&vma->vm_mm->vma_writer_wait);
+        }
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -710,6 +740,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
  */
 static inline bool vma_start_read(struct vm_area_struct *vma)
 {
+        int oldcnt;
+
         /*
          * Check before locking. A race might cause false locked result.
          * We can use READ_ONCE() for the mm_lock_seq here, and don't need
@@ -720,13 +752,19 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
         if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
                 return false;

-        if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
+        /*
+         * If VMA_LOCK_OFFSET is set, __refcount_inc_not_zero_limited() will fail
+         * because VMA_REF_LIMIT is less than VMA_LOCK_OFFSET.
+         */
+        if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
+                                                      VMA_REF_LIMIT)))
                 return false;

+        rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
         /*
-         * Overflow might produce false locked result.
+         * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
          * False unlocked result is impossible because we modify and check
-         * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
+         * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
          * modification invalidates all existing locks.
          *
          * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
@@ -735,9 +773,10 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
          * This pairs with RELEASE semantics in vma_end_write_all().
          */
         if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-                up_read(&vma->vm_lock.lock);
+                vma_refcount_put(vma);
                 return false;
         }
+
         return true;
 }

@@ -749,8 +788,14 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
  */
 static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
+        int oldcnt;
+
         mmap_assert_locked(vma->vm_mm);
-        down_read_nested(&vma->vm_lock.lock, subclass);
+        if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
+                                                      VMA_REF_LIMIT)))
+                return false;
+
+        rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
         return true;
 }

@@ -762,15 +807,13 @@ static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int
  */
 static inline bool vma_start_read_locked(struct vm_area_struct *vma)
 {
-        mmap_assert_locked(vma->vm_mm);
-        down_read(&vma->vm_lock.lock);
-        return true;
+        return vma_start_read_locked_nested(vma, 0);
 }

 static inline void vma_end_read(struct vm_area_struct *vma)
 {
         rcu_read_lock(); /* keeps vma alive till the end of up_read */
-        up_read(&vma->vm_lock.lock);
+        vma_refcount_put(vma);
         rcu_read_unlock();
 }

@@ -813,36 +856,33 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)

 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-        if (!rwsem_is_locked(&vma->vm_lock.lock))
+        if (refcount_read(&vma->vm_refcnt) <= 1)
                 vma_assert_write_locked(vma);
 }

+/*
+ * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these
+ * assertions should be made either under mmap_write_lock or when the object
+ * has been isolated under mmap_write_lock, ensuring no competing writers.
+ */
 static inline void vma_assert_attached(struct vm_area_struct *vma)
 {
-        VM_BUG_ON_VMA(vma->detached, vma);
+        VM_BUG_ON_VMA(!refcount_read(&vma->vm_refcnt), vma);
 }

 static inline void vma_assert_detached(struct vm_area_struct *vma)
 {
-        VM_BUG_ON_VMA(!vma->detached, vma);
+        VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt), vma);
 }

 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
-        vma->detached = false;
-}
-
-static inline void vma_mark_detached(struct vm_area_struct *vma)
-{
-        /* When detaching vma should be write-locked */
         vma_assert_write_locked(vma);
-        vma->detached = true;
+        vma_assert_detached(vma);
+        refcount_set(&vma->vm_refcnt, 1);
 }

-static inline bool is_vma_detached(struct vm_area_struct *vma)
-{
-        return vma->detached;
-}
+void vma_mark_detached(struct vm_area_struct *vma);

 static inline void release_fault_lock(struct vm_fault *vmf)
 {
@@ -865,7 +905,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,

 #else /* CONFIG_PER_VMA_LOCK */

-static inline void vma_lock_init(struct vm_area_struct *vma) {}
+static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
                 { return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -908,12 +948,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
         vma->vm_mm = mm;
         vma->vm_ops = &vma_dummy_vm_ops;
         INIT_LIST_HEAD(&vma->anon_vma_chain);
-#ifdef CONFIG_PER_VMA_LOCK
-        /* vma is not locked, can't use vma_mark_detached() */
-        vma->detached = true;
-#endif
         vma_numab_state_init(vma);
-        vma_lock_init(vma);
+        vma_lock_init(vma, false);
 }

 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0ca63dee1902..2d83d79d1899 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 #include

@@ -637,9 +638,8 @@ static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
 }
 #endif

-struct vma_lock {
-        struct rw_semaphore lock;
-};
+#define VMA_LOCK_OFFSET 0x40000000
+#define VMA_REF_LIMIT   (VMA_LOCK_OFFSET - 1)

 struct vma_numab_state {
         /*
@@ -717,19 +717,13 @@ struct vm_area_struct {
         };

 #ifdef CONFIG_PER_VMA_LOCK
-        /*
-         * Flag to indicate areas detached from the mm->mm_mt tree.
-         * Unstable RCU readers are allowed to read this.
-         */
-        bool detached;
-
         /*
          * Can only be written (using WRITE_ONCE()) while holding both:
          *  - mmap_lock (in write mode)
-         *  - vm_lock->lock (in write mode)
+         *  - vm_refcnt bit at VMA_LOCK_OFFSET is set
          * Can be read reliably while holding one of:
          *  - mmap_lock (in read or write mode)
-         *  - vm_lock->lock (in read or write mode)
+         *  - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1
          * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
          * while holding nothing (except RCU to keep the VMA struct allocated).
          *
@@ -792,7 +786,10 @@ struct vm_area_struct {
         struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 #ifdef CONFIG_PER_VMA_LOCK
         /* Unstable RCU readers are allowed to read this. */
-        struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+        refcount_t vm_refcnt ____cacheline_aligned_in_smp;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+        struct lockdep_map vmlock_dep_map;
+#endif
 #endif
 } __randomize_layout;

@@ -927,6 +924,7 @@ struct mm_struct {
                                                  * by mmlist_lock
                                                  */
 #ifdef CONFIG_PER_VMA_LOCK
+                struct rcuwait vma_writer_wait;
                 /*
                  * This field has lock-like semantics, meaning it is sometimes
                  * accessed with ACQUIRE/RELEASE semantics.
diff --git a/kernel/fork.c b/kernel/fork.c
index d4c75428ccaf..9d9275783cf8 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -463,12 +463,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
          * will be reinitialized.
          */
         data_race(memcpy(new, orig, sizeof(*new)));
-        vma_lock_init(new);
+        vma_lock_init(new, true);
         INIT_LIST_HEAD(&new->anon_vma_chain);
-#ifdef CONFIG_PER_VMA_LOCK
-        /* vma is not locked, can't use vma_mark_detached() */
-        new->detached = true;
-#endif
         vma_numab_state_init(new);
         dup_anon_vma_name(orig, new);

@@ -477,6 +473,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)

 void __vm_area_free(struct vm_area_struct *vma)
 {
+        /* The vma should be detached while being destroyed. */
+        vma_assert_detached(vma);
         vma_numab_state_free(vma);
         free_anon_vma_name(vma);
         kmem_cache_free(vm_area_cachep, vma);
@@ -488,8 +486,6 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
         struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
                                                   vm_rcu);

-        /* The vma should not be locked while being destroyed. */
-        VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
         __vm_area_free(vma);
 }
 #endif
@@ -1223,6 +1219,9 @@ static inline void mmap_init_lock(struct mm_struct *mm)
 {
         init_rwsem(&mm->mmap_lock);
         mm_lock_seqcount_init(mm);
+#ifdef CONFIG_PER_VMA_LOCK
+        rcuwait_init(&mm->vma_writer_wait);
+#endif
 }

 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 6af3ad675930..4600e7605cab 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -40,6 +40,7 @@ struct mm_struct init_mm = {
         .arg_lock       = __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
         .mmlist         = LIST_HEAD_INIT(init_mm.mmlist),
 #ifdef CONFIG_PER_VMA_LOCK
+        .vma_writer_wait = __RCUWAIT_INITIALIZER(init_mm.vma_writer_wait),
         .mm_lock_seq    = SEQCNT_ZERO(init_mm.mm_lock_seq),
 #endif
         .user_ns        = &init_user_ns,
diff --git a/mm/memory.c b/mm/memory.c
index 26569a44fb5c..fe1b47c34052 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6370,9 +6370,41 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif

 #ifdef CONFIG_PER_VMA_LOCK
+static inline bool __vma_enter_locked(struct vm_area_struct *vma, unsigned int tgt_refcnt)
+{
+        /*
+         * If vma is detached then only vma_mark_attached() can raise the
+         * vm_refcnt. mmap_write_lock prevents racing with vma_mark_attached().
+         */
+        if (!refcount_add_not_zero(VMA_LOCK_OFFSET, &vma->vm_refcnt))
+                return false;
+
+        rwsem_acquire(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
+        rcuwait_wait_event(&vma->vm_mm->vma_writer_wait,
+                           refcount_read(&vma->vm_refcnt) == tgt_refcnt,
+                           TASK_UNINTERRUPTIBLE);
+        lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
+
+        return true;
+}
+
+static inline void __vma_exit_locked(struct vm_area_struct *vma, bool *detached)
+{
+        *detached = refcount_sub_and_test(VMA_LOCK_OFFSET, &vma->vm_refcnt);
+        rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+}
+
 void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
 {
-        down_write(&vma->vm_lock.lock);
+        bool locked;
+
+        /*
+         * __vma_enter_locked() returns false immediately if the vma is not
+         * attached, otherwise it waits until refcnt is (VMA_LOCK_OFFSET + 1)
+         * indicating that vma is attached with no readers.
+         */
+        locked = __vma_enter_locked(vma, VMA_LOCK_OFFSET + 1);
+
         /*
          * We should use WRITE_ONCE() here because we can have concurrent reads
          * from the early lockless pessimistic check in vma_start_read().
@@ -6380,10 +6412,43 @@ void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
          * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
          */
         WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-        up_write(&vma->vm_lock.lock);
+
+        if (locked) {
+                bool detached;
+
+                __vma_exit_locked(vma, &detached);
+                VM_BUG_ON_VMA(detached, vma); /* vma should remain attached */
+        }
 }
 EXPORT_SYMBOL_GPL(__vma_start_write);

+void vma_mark_detached(struct vm_area_struct *vma)
+{
+        vma_assert_write_locked(vma);
+        vma_assert_attached(vma);
+
+        /*
+         * We are the only writer, so no need to use vma_refcount_put().
+         * The condition below is unlikely because the vma has been already
+         * write-locked and readers can increment vm_refcnt only temporarily
+         * before they check vm_lock_seq, realize the vma is locked and drop
+         * back the vm_refcnt. That is a narrow window for observing a raised
+         * vm_refcnt.
+         */
+        if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) {
+                /*
+                 * Wait until refcnt is VMA_LOCK_OFFSET => detached with no
+                 * readers.
+                 */
+                if (__vma_enter_locked(vma, VMA_LOCK_OFFSET)) {
+                        bool detached;
+
+                        __vma_exit_locked(vma, &detached);
+                        VM_BUG_ON_VMA(!detached, vma);
+                }
+        }
+}
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the
@@ -6396,7 +6461,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
         struct vm_area_struct *vma;

         rcu_read_lock();
-retry:
         vma = mas_walk(&mas);
         if (!vma)
                 goto inval;
@@ -6404,13 +6468,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
         if (!vma_start_read(vma))
                 goto inval;

-        /* Check if the VMA got isolated after we found it */
-        if (is_vma_detached(vma)) {
-                vma_end_read(vma);
-                count_vm_vma_lock_event(VMA_LOCK_MISS);
-                /* The area was replaced with another one */
-                goto retry;
-        }
         /*
          * At this point, we have a stable reference to a VMA: The VMA is
          * locked and we know it hasn't already been isolated.
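Before moving on to the test-harness changes, the vm_refcnt protocol that vma_start_read(), vma_refcount_put() and __vma_enter_locked() implement can be modeled in a userspace sketch with C11 atomics. This is only an illustration, not the kernel API: the model_* names are invented, and it omits lockdep, the rcuwait-based writer sleep, and the vm_lock_seq validation the real code performs.

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Userspace model of vm_refcnt:
 *   refcnt == 0              : detached
 *   refcnt == 1              : attached, no readers, no writer
 *   1 < refcnt < REF_LIMIT   : attached, (refcnt - 1) readers
 *   refcnt & LOCK_OFFSET     : a writer is present
 */
#define LOCK_OFFSET 0x40000000
#define REF_LIMIT   (LOCK_OFFSET - 1)

struct model_vma {
	atomic_int refcnt;
};

/* Reader: increment, unless detached, writer bit set, or near overflow. */
static bool model_read_trylock(struct model_vma *vma)
{
	int old = atomic_load(&vma->refcnt);

	do {
		/* 0 = detached; >= REF_LIMIT = writer present or saturated */
		if (old == 0 || old >= REF_LIMIT)
			return false;
	} while (!atomic_compare_exchange_weak(&vma->refcnt, &old, old + 1));
	return true;
}

static void model_read_unlock(struct model_vma *vma)
{
	/* the kernel additionally wakes a waiting writer here */
	atomic_fetch_sub(&vma->refcnt, 1);
}

/* Writer: announce presence by setting the top usable bit. */
static bool model_write_enter(struct model_vma *vma)
{
	if (atomic_load(&vma->refcnt) == 0)
		return false;	/* detached vma cannot be write-locked */
	atomic_fetch_add(&vma->refcnt, LOCK_OFFSET);
	/* real code now sleeps until refcnt drops to LOCK_OFFSET + 1 */
	return true;
}

static void model_write_exit(struct model_vma *vma)
{
	atomic_fetch_sub(&vma->refcnt, LOCK_OFFSET);
}
```

A reader that loses the race simply falls back to mmap_lock, which is why read-locking is allowed to fail both on the writer bit and on (unlikely) refcount saturation.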
diff --git a/tools/testing/vma/linux/atomic.h b/tools/testing/vma/linux/atomic.h
index 3e1b6adc027b..788c597c4fde 100644
--- a/tools/testing/vma/linux/atomic.h
+++ b/tools/testing/vma/linux/atomic.h
@@ -9,4 +9,9 @@
 #define atomic_set(x, y)        uatomic_set(x, y)
 #define U8_MAX                  UCHAR_MAX

+#ifndef atomic_cmpxchg_relaxed
+#define atomic_cmpxchg_relaxed          uatomic_cmpxchg
+#define atomic_cmpxchg_release          uatomic_cmpxchg
+#endif /* atomic_cmpxchg_relaxed */
+
 #endif /* _LINUX_ATOMIC_H */
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 47c8b03ffbbd..2ce032943861 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -25,7 +25,7 @@
 #include
 #include
 #include
-#include
+#include

 extern unsigned long stack_guard_gap;
 #ifdef CONFIG_MMU
@@ -134,10 +134,6 @@ typedef __bitwise unsigned int vm_fault_t;
  */
 #define pr_warn_once pr_err

-typedef struct refcount_struct {
-        atomic_t refs;
-} refcount_t;
-
 struct kref {
         refcount_t refcount;
 };
@@ -232,15 +228,12 @@ struct mm_struct {
         unsigned long flags; /* Must use atomic bitops to access */
 };

-struct vma_lock {
-        struct rw_semaphore lock;
-};
-
-
 struct file {
         struct address_space    *f_mapping;
 };

+#define VMA_LOCK_OFFSET 0x40000000
+
 struct vm_area_struct {
         /* The first cache line has the info for VMA tree walking. */
@@ -268,16 +261,13 @@ struct vm_area_struct {
         };

 #ifdef CONFIG_PER_VMA_LOCK
-        /* Flag to indicate areas detached from the mm->mm_mt tree */
-        bool detached;
-
         /*
          * Can only be written (using WRITE_ONCE()) while holding both:
          *  - mmap_lock (in write mode)
-         *  - vm_lock.lock (in write mode)
+         *  - vm_refcnt bit at VMA_LOCK_OFFSET is set
          * Can be read reliably while holding one of:
          *  - mmap_lock (in read or write mode)
-         *  - vm_lock.lock (in read or write mode)
+         *  - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1
          * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
          * while holding nothing (except RCU to keep the VMA struct allocated).
          *
@@ -286,7 +276,6 @@ struct vm_area_struct {
          * slowpath.
          */
         unsigned int vm_lock_seq;
-        struct vma_lock vm_lock;
 #endif

         /*
@@ -339,6 +328,10 @@ struct vm_area_struct {
         struct vma_numab_state *numab_state;    /* NUMA Balancing state */
 #endif
         struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+        /* Unstable RCU readers are allowed to read this. */
+        refcount_t vm_refcnt;
+#endif
 } __randomize_layout;

 struct vm_fault {};
@@ -463,23 +456,41 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
         return mas_find(&vmi->mas, ULONG_MAX);
 }

-static inline void vma_lock_init(struct vm_area_struct *vma)
+/*
+ * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these
+ * assertions should be made either under mmap_write_lock or when the object
+ * has been isolated under mmap_write_lock, ensuring no competing writers.
+ */
+static inline void vma_assert_attached(struct vm_area_struct *vma)
 {
-        init_rwsem(&vma->vm_lock.lock);
-        vma->vm_lock_seq = UINT_MAX;
+        VM_BUG_ON_VMA(!refcount_read(&vma->vm_refcnt), vma);
 }

-static inline void vma_mark_attached(struct vm_area_struct *vma)
+static inline void vma_assert_detached(struct vm_area_struct *vma)
 {
-        vma->detached = false;
+        VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt), vma);
 }

 static inline void vma_assert_write_locked(struct vm_area_struct *);
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+        vma_assert_write_locked(vma);
+        vma_assert_detached(vma);
+        refcount_set(&vma->vm_refcnt, 1);
+}
+
 static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
-        /* When detaching vma should be write-locked */
         vma_assert_write_locked(vma);
-        vma->detached = true;
+        vma_assert_attached(vma);
+
+        /* We are the only writer, so no need to use vma_refcount_put(). */
+        if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) {
+                /*
+                 * Reader must have temporarily raised vm_refcnt but it will
+                 * drop it without using the vma since vma is write-locked.
+                 */
+        }
 }

 extern const struct vm_operations_struct vma_dummy_vm_ops;

@@ -492,9 +503,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
         vma->vm_mm = mm;
         vma->vm_ops = &vma_dummy_vm_ops;
         INIT_LIST_HEAD(&vma->anon_vma_chain);
-        /* vma is not locked, can't use vma_mark_detached() */
-        vma->detached = true;
-        vma_lock_init(vma);
+        vma->vm_lock_seq = UINT_MAX;
 }

 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -517,10 +526,9 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
                 return NULL;

         memcpy(new, orig, sizeof(*new));
-        vma_lock_init(new);
+        refcount_set(&new->vm_refcnt, 0);
+        new->vm_lock_seq = UINT_MAX;
         INIT_LIST_HEAD(&new->anon_vma_chain);
-        /* vma is not locked, can't use vma_mark_detached() */
-        new->detached = true;

         return new;
 }

From patchwork Thu Jan 9 02:30:21 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931819
Date: Wed, 8 Jan 2025 18:30:21 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-13-surenb@google.com>
Subject: [PATCH v8 12/16] mm/debug: print vm_refcnt state when dumping the vma
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com
vm_refcnt encodes a number of useful states:
- whether the vma is attached or detached
- the number of current vma readers
- the presence of a vma writer

Let's include it in the vma dump.

Signed-off-by: Suren Baghdasaryan
---
 mm/debug.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/debug.c b/mm/debug.c
index 8d2acf432385..325d7bf22038 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -178,6 +178,17 @@ EXPORT_SYMBOL(dump_page);

 void dump_vma(const struct vm_area_struct *vma)
 {
+#ifdef CONFIG_PER_VMA_LOCK
+        pr_emerg("vma %px start %px end %px mm %px\n"
+                "prot %lx anon_vma %px vm_ops %px\n"
+                "pgoff %lx file %px private_data %px\n"
+                "flags: %#lx(%pGv) refcnt %x\n",
+                vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_mm,
+                (unsigned long)pgprot_val(vma->vm_page_prot),
+                vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
+                vma->vm_file, vma->vm_private_data,
+                vma->vm_flags, &vma->vm_flags, refcount_read(&vma->vm_refcnt));
+#else
         pr_emerg("vma %px start %px end %px mm %px\n"
                 "prot %lx anon_vma %px vm_ops %px\n"
                 "pgoff %lx file %px private_data %px\n"
@@ -187,6 +198,7 @@ void dump_vma(const struct vm_area_struct *vma)
                 vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
                 vma->vm_file, vma->vm_private_data,
                 vma->vm_flags, &vma->vm_flags);
+#endif
 }
 EXPORT_SYMBOL(dump_vma);

From patchwork Thu Jan 9 02:30:22 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931820
+k74Owo0PpUO4cGFbjMyynNM/6BpQbgJdlHX27vn2C78tUMwc7ONxR8+30/ZswYYoVsoTprjcOm KYQ== X-Google-Smtp-Source: AGHT+IHHneXTiyqKXlP1VpELIhdiUZ7vx0qpWsk0pXxK4l9MO/2MOdaa5NjDmeJ2cE4v/bbSs8TX2rtPRrA= X-Received: from pfar8.prod.google.com ([2002:a05:6a00:a908:b0:728:2357:646a]) (user=surenb job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6a21:350d:b0:1e1:b105:87b with SMTP id adf61e73a8af0-1e88cfdc0f5mr8799971637.23.1736389856900; Wed, 08 Jan 2025 18:30:56 -0800 (PST) Date: Wed, 8 Jan 2025 18:30:22 -0800 In-Reply-To: <20250109023025.2242447-1-surenb@google.com> Mime-Version: 1.0 References: <20250109023025.2242447-1-surenb@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20250109023025.2242447-14-surenb@google.com> Subject: [PATCH v8 13/16] mm: remove extra vma_numab_state_init() call From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com X-Stat-Signature: roiqi3iddc69bjpqy71tx3mhdj939utt X-Rspam-User: X-Rspamd-Queue-Id: 180AD4000E X-Rspamd-Server: rspam08 X-HE-Tag: 1736389857-31901 X-HE-Meta: 
vma_init() already memsets the whole vm_area_struct to 0, so there is no need for an additional vma_numab_state_init() call.
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ec7c064792ff..aca65cc0a26e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -948,7 +948,6 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_numab_state_init(vma);
 	vma_lock_init(vma, false);
 }

From patchwork Thu Jan 9 02:30:23 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931821
Date: Wed, 8 Jan 2025 18:30:23 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-15-surenb@google.com>
Subject: [PATCH v8 14/16] mm: prepare lock_vma_under_rcu() for vma reuse possibility
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com
Once we make vma cache SLAB_TYPESAFE_BY_RCU, it will be possible for a vma to be reused and attached to another mm after
lock_vma_under_rcu() locks the vma. lock_vma_under_rcu() should ensure that vma_start_read() is using the original mm and after locking the vma it should ensure that vma->vm_mm has not changed from under us.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 10 ++++++----
 mm/memory.c        |  7 ++++---
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index aca65cc0a26e..1d6b1563b956 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -737,8 +737,10 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
  * using mmap_lock. The function should never yield false unlocked result.
+ * False locked result is possible if mm_lock_seq overflows or if vma gets
+ * reused and attached to a different mm before we lock it.
  */
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	int oldcnt;
@@ -749,7 +751,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * we don't rely on for anything - the mm_lock_seq read against which we
 	 * need ordering is below.
 	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
 		return false;

 	/*
@@ -772,7 +774,7 @@
 	 * after it has been unlocked.
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
-	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
 		vma_refcount_put(vma);
 		return false;
 	}
@@ -906,7 +908,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 #else /* CONFIG_PER_VMA_LOCK */
 static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 	{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}

diff --git a/mm/memory.c b/mm/memory.c
index fe1b47c34052..a8e7e794178e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6465,7 +6465,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma)
 		goto inval;
-	if (!vma_start_read(vma))
+	if (!vma_start_read(mm, vma))
 		goto inval;

 	/*
@@ -6475,8 +6475,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	 * fields are accessible for RCU readers.
 	 */
-	/* Check since vm_start/vm_end might change before we lock the VMA */
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+	/* Check if the vma we locked is the right one. */
+	if (unlikely(vma->vm_mm != mm ||
+		     address < vma->vm_start || address >= vma->vm_end))
 		goto inval_end_read;

 	rcu_read_unlock();

From patchwork Thu Jan 9 02:30:24 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13931822
Date: Wed, 8 Jan 2025 18:30:24 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
References: <20250109023025.2242447-1-surenb@google.com>
Message-ID: <20250109023025.2242447-16-surenb@google.com>
Subject: [PATCH v8 15/16] mm: make vma cache SLAB_TYPESAFE_BY_RCU
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com
To enable SLAB_TYPESAFE_BY_RCU for vma cache we need to ensure that object reuse before RCU grace period is over will be detected by lock_vma_under_rcu(). Current checks are sufficient as long as vma is detached before it is freed. The only place this is not currently happening is in exit_mmap(). Add the missing vma_mark_detached() in exit_mmap(). Another issue which might trick lock_vma_under_rcu() during vma reuse is vm_area_dup(), which copies the entire content of the vma into a new one, overriding new vma's vm_refcnt and temporarily making it appear as attached. This might trick a racing lock_vma_under_rcu() to operate on a reused vma if it found the vma before it got reused. To prevent this situation, we should ensure that vm_refcnt stays at detached state (0) when it is copied and advances to attached state only after it is added into the vma tree. Introduce vma_copy() which preserves new vma's vm_refcnt and use it in vm_area_dup().
Since all vmas are in detached state with no current readers when they are freed, lock_vma_under_rcu() will not be able to take vm_refcnt after vma got detached even if vma is reused. Finally, make vm_area_cachep SLAB_TYPESAFE_BY_RCU. This will facilitate vm_area_struct reuse and will minimize the number of call_rcu() calls. Signed-off-by: Suren Baghdasaryan --- include/linux/mm.h | 2 - include/linux/mm_types.h | 10 +++-- include/linux/slab.h | 6 --- kernel/fork.c | 72 ++++++++++++++++++++------------ mm/mmap.c | 3 +- mm/vma.c | 11 ++--- mm/vma.h | 2 +- tools/testing/vma/vma_internal.h | 7 +--- 8 files changed, 59 insertions(+), 54 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 1d6b1563b956..a674558e4c05 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code, struct vm_area_struct *vm_area_alloc(struct mm_struct *); struct vm_area_struct *vm_area_dup(struct vm_area_struct *); void vm_area_free(struct vm_area_struct *); -/* Use only if VMA has no other users */ -void __vm_area_free(struct vm_area_struct *vma); #ifndef CONFIG_MMU extern struct rb_root nommu_region_tree; diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 2d83d79d1899..93bfcd0c1fde 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -582,6 +582,12 @@ static inline void *folio_get_private(struct folio *folio) typedef unsigned long vm_flags_t; +/* + * freeptr_t represents a SLUB freelist pointer, which might be encoded + * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled. + */ +typedef struct { unsigned long v; } freeptr_t; + /* * A region containing a mapping of a non-memory backed file under NOMMU * conditions. 
These are held in a global tree and are pinned by the VMAs that @@ -695,9 +701,7 @@ struct vm_area_struct { unsigned long vm_start; unsigned long vm_end; }; -#ifdef CONFIG_PER_VMA_LOCK - struct rcu_head vm_rcu; /* Used for deferred freeing. */ -#endif + freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */ }; /* diff --git a/include/linux/slab.h b/include/linux/slab.h index 10a971c2bde3..681b685b6c4e 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -234,12 +234,6 @@ enum _slab_flag_bits { #define SLAB_NO_OBJ_EXT __SLAB_FLAG_UNUSED #endif -/* - * freeptr_t represents a SLUB freelist pointer, which might be encoded - * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled. - */ -typedef struct { unsigned long v; } freeptr_t; - /* * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests. * diff --git a/kernel/fork.c b/kernel/fork.c index 9d9275783cf8..770b973a099c 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -449,6 +449,41 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm) return vma; } +static void vma_copy(const struct vm_area_struct *src, struct vm_area_struct *dest) +{ + dest->vm_mm = src->vm_mm; + dest->vm_ops = src->vm_ops; + dest->vm_start = src->vm_start; + dest->vm_end = src->vm_end; + dest->anon_vma = src->anon_vma; + dest->vm_pgoff = src->vm_pgoff; + dest->vm_file = src->vm_file; + dest->vm_private_data = src->vm_private_data; + vm_flags_init(dest, src->vm_flags); + memcpy(&dest->vm_page_prot, &src->vm_page_prot, + sizeof(dest->vm_page_prot)); + /* + * src->shared.rb may be modified concurrently, but the clone + * will be reinitialized. 
+ */ + data_race(memcpy(&dest->shared, &src->shared, sizeof(dest->shared))); + memcpy(&dest->vm_userfaultfd_ctx, &src->vm_userfaultfd_ctx, + sizeof(dest->vm_userfaultfd_ctx)); +#ifdef CONFIG_ANON_VMA_NAME + dest->anon_name = src->anon_name; +#endif +#ifdef CONFIG_SWAP + memcpy(&dest->swap_readahead_info, &src->swap_readahead_info, + sizeof(dest->swap_readahead_info)); +#endif +#ifndef CONFIG_MMU + dest->vm_region = src->vm_region; +#endif +#ifdef CONFIG_NUMA + dest->vm_policy = src->vm_policy; +#endif +} + struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) { struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL); @@ -458,11 +493,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) ASSERT_EXCLUSIVE_WRITER(orig->vm_flags); ASSERT_EXCLUSIVE_WRITER(orig->vm_file); - /* - * orig->shared.rb may be modified concurrently, but the clone - * will be reinitialized. - */ - data_race(memcpy(new, orig, sizeof(*new))); + vma_copy(orig, new); vma_lock_init(new, true); INIT_LIST_HEAD(&new->anon_vma_chain); vma_numab_state_init(new); @@ -471,7 +502,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) return new; } -void __vm_area_free(struct vm_area_struct *vma) +void vm_area_free(struct vm_area_struct *vma) { /* The vma should be detached while being destroyed. 
*/ vma_assert_detached(vma); @@ -480,25 +511,6 @@ void __vm_area_free(struct vm_area_struct *vma) kmem_cache_free(vm_area_cachep, vma); } -#ifdef CONFIG_PER_VMA_LOCK -static void vm_area_free_rcu_cb(struct rcu_head *head) -{ - struct vm_area_struct *vma = container_of(head, struct vm_area_struct, - vm_rcu); - - __vm_area_free(vma); -} -#endif - -void vm_area_free(struct vm_area_struct *vma) -{ -#ifdef CONFIG_PER_VMA_LOCK - call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb); -#else - __vm_area_free(vma); -#endif -} - static void account_kernel_stack(struct task_struct *tsk, int account) { if (IS_ENABLED(CONFIG_VMAP_STACK)) { @@ -3144,6 +3156,11 @@ void __init mm_cache_init(void) void __init proc_caches_init(void) { + struct kmem_cache_args args = { + .use_freeptr_offset = true, + .freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr), + }; + sighand_cachep = kmem_cache_create("sighand_cache", sizeof(struct sighand_struct), 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU| @@ -3160,8 +3177,9 @@ void __init proc_caches_init(void) sizeof(struct fs_struct), 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, NULL); - vm_area_cachep = KMEM_CACHE(vm_area_struct, - SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC| + vm_area_cachep = kmem_cache_create("vm_area_struct", + sizeof(struct vm_area_struct), &args, + SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU| SLAB_ACCOUNT); mmap_init(); nsproxy_cache_init(); diff --git a/mm/mmap.c b/mm/mmap.c index cda01071c7b1..7aa36216ecc0 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1305,7 +1305,8 @@ void exit_mmap(struct mm_struct *mm) do { if (vma->vm_flags & VM_ACCOUNT) nr_accounted += vma_pages(vma); - remove_vma(vma, /* unreachable = */ true); + vma_mark_detached(vma); + remove_vma(vma); count++; cond_resched(); vma = vma_next(&vmi); diff --git a/mm/vma.c b/mm/vma.c index 93ff42ac2002..0a5158d611e3 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -406,19 +406,14 @@ static bool can_vma_merge_right(struct vma_merge_struct *vmg, /* * Close a vm 
structure and free it. */ -void remove_vma(struct vm_area_struct *vma, bool unreachable) +void remove_vma(struct vm_area_struct *vma) { might_sleep(); vma_close(vma); if (vma->vm_file) fput(vma->vm_file); mpol_put(vma_policy(vma)); - if (unreachable) { - vma_mark_detached(vma); - __vm_area_free(vma); - } else { - vm_area_free(vma); - } + vm_area_free(vma); } /* @@ -1201,7 +1196,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms, /* Remove and clean up vmas */ mas_set(mas_detach, 0); mas_for_each(mas_detach, vma, ULONG_MAX) - remove_vma(vma, /* unreachable = */ false); + remove_vma(vma); vm_unacct_memory(vms->nr_accounted); validate_mm(mm); diff --git a/mm/vma.h b/mm/vma.h index 63dd38d5230c..f51005b95b39 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -170,7 +170,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm, unsigned long start, size_t len, struct list_head *uf, bool unlock); -void remove_vma(struct vm_area_struct *vma, bool unreachable); +void remove_vma(struct vm_area_struct *vma); void unmap_region(struct ma_state *mas, struct vm_area_struct *vma, struct vm_area_struct *prev, struct vm_area_struct *next); diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h index 2ce032943861..49a85ce0d45a 100644 --- a/tools/testing/vma/vma_internal.h +++ b/tools/testing/vma/vma_internal.h @@ -697,14 +697,9 @@ static inline void mpol_put(struct mempolicy *) { } -static inline void __vm_area_free(struct vm_area_struct *vma) -{ - free(vma); -} - static inline void vm_area_free(struct vm_area_struct *vma) { - __vm_area_free(vma); + free(vma); } static inline void lru_add_drain(void) From patchwork Thu Jan 9 02:30:25 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 13931823 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
Date: Wed, 8 Jan 2025 18:30:25 -0800
In-Reply-To: <20250109023025.2242447-1-surenb@google.com>
Mime-Version: 1.0
References: <20250109023025.2242447-1-surenb@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20250109023025.2242447-17-surenb@google.com>
Subject: [PATCH v8 16/16] docs/mm: document latest changes to vm_lock
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com, "Liam R. Howlett"

Change the documentation to reflect that vm_lock is integrated into vma
and replaced with vm_refcnt.
Document newly introduced vma_start_read_locked{_nested} functions.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Liam R. Howlett
---
 Documentation/mm/process_addrs.rst | 44 ++++++++++++++++++------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index 81417fa2ed20..f573de936b5d 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -716,9 +716,14 @@ calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
 critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
 before releasing the RCU lock via :c:func:`!rcu_read_unlock`.
 
-VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for
-their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it
-via :c:func:`!vma_end_read`.
+In cases when the user already holds the mmap read lock, :c:func:`!vma_start_read_locked`
+and :c:func:`!vma_start_read_locked_nested` can be used. These functions do not
+fail due to lock contention but the caller should still check their return values
+in case they fail for other reasons.
+
+VMA read locks increment the :c:member:`!vma.vm_refcnt` reference counter for their
+duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it via
+:c:func:`!vma_end_read`.
 
 VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a
 VMA is about to be modified, unlike :c:func:`!vma_start_read` the lock is always
@@ -726,9 +731,9 @@ acquired. An mmap write lock **must** be held for the duration of the VMA write
 lock, releasing or downgrading the mmap write lock also releases the VMA write
 lock so there is no :c:func:`!vma_end_write` function.
 
-Note that a semaphore write lock is not held across a VMA lock. Rather, a
-sequence number is used for serialisation, and the write semaphore is only
-acquired at the point of write lock to update this.
+Note that when write-locking a VMA, the :c:member:`!vma.vm_refcnt` is temporarily
+modified so that readers can detect the presence of a writer. The reference counter is
+restored once the vma sequence number used for serialisation is updated.
 
 This ensures the semantics we require - VMA write locks provide exclusive write
 access to the VMA.
@@ -738,7 +743,7 @@ Implementation details
 
 The VMA lock mechanism is designed to be a lightweight means of avoiding the use
 of the heavily contended mmap lock. It is implemented using a combination of a
-read/write semaphore and sequence numbers belonging to the containing
+reference counter and sequence numbers belonging to the containing
 :c:struct:`!struct mm_struct` and the VMA.
 
 Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
@@ -779,28 +784,31 @@ release of any VMA locks on its release makes sense, as you would never want to
 keep VMAs locked across entirely separate write operations. It also maintains
 correct lock ordering.
 
-Each time a VMA read lock is acquired, we acquire a read lock on the
-:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that
-the sequence count of the VMA does not match that of the mm.
+Each time a VMA read lock is acquired, we increment the :c:member:`!vma.vm_refcnt`
+reference counter and check that the sequence count of the VMA does not match
+that of the mm.
 
-If it does, the read lock fails. If it does not, we hold the lock, excluding
-writers, but permitting other readers, who will also obtain this lock under RCU.
+If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped.
+If it does not, we keep the reference counter raised, excluding writers, but
+permitting other readers, who can also obtain this lock under RCU.
 
 Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
 are also RCU safe, so the whole read lock operation is guaranteed to function
 correctly.
 
-On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock`
-read/write semaphore, before setting the VMA's sequence number under this lock,
-also simultaneously holding the mmap write lock.
+On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be
+modified by readers and wait for all readers to drop their reference count.
+Once there are no readers, the VMA's sequence number is set to match that of the
+mm. During this entire operation the mmap write lock is held.
 
 This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep
 until these are finished and mutual exclusion is achieved.
 
-After setting the VMA's sequence number, the lock is released, avoiding
-complexity with a long-term held write lock.
+After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt`
+indicating a writer is cleared. From this point on, the VMA's sequence number will
+indicate the VMA's write-locked state until the mmap write lock is dropped or downgraded.
 
-This clever combination of a read/write semaphore and sequence count allows for
+This clever combination of a reference counter and sequence count allows for
 fast RCU-based per-VMA lock acquisition (especially on page fault, though
 utilised elsewhere) with minimal complexity around lock ordering.