From patchwork Wed Nov 20 00:08:22 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13880638
From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 19 Nov 2024 16:08:22 -0800
Subject: [PATCH v4 1/5] mm: introduce vma_start_read_locked{_nested} helpers
To: akpm@linux-foundation.org
Cc: willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
    mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com,
    oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com,
    peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    minchan@google.com, jannh@google.com, shakeel.butt@linux.dev,
    souravpanda@google.com, pasha.tatashin@soleen.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com
Message-ID: <20241120000826.335387-2-surenb@google.com>
In-Reply-To: <20241120000826.335387-1-surenb@google.com>

Introduce helper functions which can be used to read-lock a VMA when
holding mmap_lock for read. Replace direct accesses to vma->vm_lock
with these new helpers.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Davidlohr Bueso
Reviewed-by: Shakeel Butt
---
 include/linux/mm.h | 24 ++++++++++++++++++++++++
 mm/userfaultfd.c   | 22 +++++----------------
 2 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fecd47239fa9..1ba2e480ae63 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -722,6 +722,30 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	return true;
 }
 
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read_nested(&vma->vm_lock->lock, subclass);
+}
+
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked(struct vm_area_struct *vma)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read(&vma->vm_lock->lock);
+}
+
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 60a0be33766f..87db4b32b82a 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -84,16 +84,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
-	if (!IS_ERR(vma)) {
-		/*
-		 * We cannot use vma_start_read() as it may fail due to
-		 * false locked (see comment in vma_start_read()). We
-		 * can avoid that by directly locking vm_lock under
-		 * mmap_lock, which guarantees that nobody can lock the
-		 * vma for write (vma_start_write()) under us.
-		 */
-		down_read(&vma->vm_lock->lock);
-	}
+	if (!IS_ERR(vma))
+		vma_start_read_locked(vma);
 
 	mmap_read_unlock(mm);
 	return vma;
@@ -1476,14 +1468,10 @@ static int uffd_move_lock(struct mm_struct *mm,
 
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		/*
-		 * See comment in uffd_lock_vma() as to why not using
-		 * vma_start_read() here.
-		 */
-		down_read(&(*dst_vmap)->vm_lock->lock);
+		vma_start_read_locked(*dst_vmap);
 		if (*dst_vmap != *src_vmap)
-			down_read_nested(&(*src_vmap)->vm_lock->lock,
-					 SINGLE_DEPTH_NESTING);
+			vma_start_read_locked_nested(*src_vmap,
+						     SINGLE_DEPTH_NESTING);
 	}
 	mmap_read_unlock(mm);
 	return err;
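
[For context, the lock-handoff pattern these helpers enable looks roughly like
the sketch below: take the per-VMA read lock while mmap_lock guarantees that
nobody can write-lock the VMA, then drop mmap_lock and keep only the VMA lock.
This is an illustrative sketch, not part of the patch; lock_vma_sketch() is a
hypothetical name, find_vma_and_prepare_anon() and the helpers are the ones
shown in the diff above, and error handling is trimmed.]

	/*
	 * Illustrative sketch of the pattern uffd_lock_vma() implements
	 * with the new helper. Under mmap_read_lock() nobody can
	 * write-lock the VMA, so vma_start_read_locked() cannot fail the
	 * way vma_start_read() can; once the VMA read lock is held,
	 * mmap_lock can be dropped.
	 */
	static struct vm_area_struct *lock_vma_sketch(struct mm_struct *mm,
						      unsigned long address)
	{
		struct vm_area_struct *vma;

		mmap_read_lock(mm);
		vma = find_vma_and_prepare_anon(mm, address);
		if (!IS_ERR(vma))
			vma_start_read_locked(vma); /* cannot fail under mmap_lock */
		mmap_read_unlock(mm);               /* VMA stays read-locked */

		return vma;
	}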
From patchwork Wed Nov 20 00:08:23 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13880639
From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 19 Nov 2024 16:08:23 -0800
Subject: [PATCH v4 2/5] mm: move per-vma lock into vm_area_struct
To: akpm@linux-foundation.org
Message-ID: <20241120000826.335387-3-surenb@google.com>
In-Reply-To: <20241120000826.335387-1-surenb@google.com>

Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of the performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].

Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefits, there are no reasons
for this split. Merging the vm_lock back into vm_area_struct also
allows vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this
patchset.
Move vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned as well. With
the kernel compiled using defconfig, this causes VMA memory consumption
to grow from 160 (vm_area_struct) + 40 (vm_lock) bytes to 256 bytes
(columns below are object size, objects per slab, pages per slab):

    slabinfo before:
     ... : ...
     vma_lock        ...  40 102 1 : ...
     vm_area_struct  ... 160  51 2 : ...

    slabinfo after moving vm_lock:
     ... : ...
     vm_area_struct  ... 256  32 2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64
pages, which is 5.5MB per 100000 VMAs. Note that the size of this
structure is dependent on the kernel configuration and typically the
original size is higher than 160 bytes. Therefore these calculations
are close to the worst case scenario. A more realistic vm_area_struct
usage before this change is:

     ... : ...
     vma_lock        ...  40 102 1 : ...
     vm_area_struct  ... 176  46 2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 54 to 64
pages, which is 3.9MB per 100000 VMAs. This memory consumption growth
can be addressed later by optimizing the vm_lock.

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
---
 include/linux/mm.h               | 28 ++++++++++--------
 include/linux/mm_types.h         |  6 ++--
 kernel/fork.c                    | 49 ++++----------------------------
 tools/testing/vma/vma_internal.h | 33 +++++----------------
 4 files changed, 32 insertions(+), 84 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ba2e480ae63..737c003b0a1e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -684,6 +684,12 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_lock_init(struct vm_area_struct *vma)
+{
+	init_rwsem(&vma->vm_lock.lock);
+	vma->vm_lock_seq = UINT_MAX;
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -701,7 +707,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
+	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
 		return false;
 
 	/*
@@ -716,7 +722,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
 	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock->lock);
+		up_read(&vma->vm_lock.lock);
 		return false;
 	}
 	return true;
@@ -731,7 +737,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock->lock, subclass);
+	down_read_nested(&vma->vm_lock.lock, subclass);
 }
 
 /*
@@ -743,13 +749,13 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
 static inline void vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock->lock);
+	down_read(&vma->vm_lock.lock);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock->lock);
+	up_read(&vma->vm_lock.lock);
 	rcu_read_unlock();
 }
 
@@ -778,7 +784,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock->lock);
+	down_write(&vma->vm_lock.lock);
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -786,7 +792,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock->lock);
+	up_write(&vma->vm_lock.lock);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -798,7 +804,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock->lock))
+	if (!rwsem_is_locked(&vma->vm_lock.lock))
 		vma_assert_write_locked(vma);
 }
 
@@ -831,6 +837,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 
 #else /* CONFIG_PER_VMA_LOCK */
 
+static inline void vma_lock_init(struct vm_area_struct *vma) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -865,10 +872,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
 
-/*
- * WARNING: vma_init does not initialize vma->vm_lock.
- * Use vm_area_alloc()/vm_area_free() if vma needs locking.
- */
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
 	memset(vma, 0, sizeof(*vma));
@@ -877,6 +880,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
 	vma_numab_state_init(vma);
+	vma_lock_init(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 80fef38d9d64..5c4bfdcfac72 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -716,8 +716,6 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	/* Unstable RCU readers are allowed to read this. */
-	struct vma_lock *vm_lock;
 #endif
 
 	/*
@@ -770,6 +768,10 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Unstable RCU readers are allowed to read this. */
+	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+#endif
 } __randomize_layout;
 
 #ifdef CONFIG_NUMA
diff --git a/kernel/fork.c b/kernel/fork.c
index 0061cf2450ef..7823797e31d2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
-#ifdef CONFIG_PER_VMA_LOCK
-
-/* SLAB cache for vm_area_struct.lock */
-static struct kmem_cache *vma_lock_cachep;
-
-static bool vma_lock_alloc(struct vm_area_struct *vma)
-{
-	vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
-}
-
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	kmem_cache_free(vma_lock_cachep, vma->vm_lock);
-}
-
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
-static inline void vma_lock_free(struct vm_area_struct *vma) {}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
@@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		kmem_cache_free(vm_area_cachep, vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -496,10 +463,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * will be reinitialized.
 	 */
 	data_race(memcpy(new, orig, sizeof(*new)));
-	if (!vma_lock_alloc(new)) {
-		kmem_cache_free(vm_area_cachep, new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
@@ -511,7 +475,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 {
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
-	vma_lock_free(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
@@ -522,7 +485,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 						  vm_rcu);
 
 	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -3168,11 +3131,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-
-	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
-#ifdef CONFIG_PER_VMA_LOCK
-	vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
-#endif
+	vm_area_cachep = KMEM_CACHE(vm_area_struct,
+			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 1d9fc97b8e80..11c2c38ca4e8 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -230,10 +230,10 @@ struct vm_area_struct {
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 *  - mmap_lock (in write mode)
-	 *  - vm_lock->lock (in write mode)
+	 *  - vm_lock.lock (in write mode)
 	 * Can be read reliably while holding one of:
 	 *  - mmap_lock (in read or write mode)
-	 *  - vm_lock->lock (in read or write mode)
+	 *  - vm_lock.lock (in read or write mode)
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -242,7 +242,7 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	struct vma_lock *vm_lock;
+	struct vma_lock vm_lock;
 #endif
 
 	/*
@@ -408,17 +408,10 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 	return mas_find(&vmi->mas, ULONG_MAX);
 }
 
-static inline bool vma_lock_alloc(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma)
 {
-	vma->vm_lock = calloc(1, sizeof(struct vma_lock));
-
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
+	init_rwsem(&vma->vm_lock.lock);
 	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *);
@@ -439,6 +432,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
+	vma_lock_init(vma);
 }
 
 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -449,10 +443,6 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		free(vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -465,10 +455,7 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		return NULL;
 
 	memcpy(new, orig, sizeof(*new));
-	if (!vma_lock_alloc(new)) {
-		free(new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 
 	return new;
@@ -638,14 +625,8 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	free(vma->vm_lock);
-}
-
 static inline void __vm_area_free(struct vm_area_struct *vma)
 {
-	vma_lock_free(vma);
 	free(vma);
 }
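
[The changelog's page arithmetic can be sanity-checked against the slab
geometry it quotes. A quick userspace check, illustrative only, with the
objects-per-slab and pages-per-slab figures taken from the slabinfo excerpts
above:]

	/* Back-of-the-envelope check of the changelog's page math.
	 * Before: 160-byte vm_area_struct packs 51 objects into 2 pages
	 * (176-byte/46 objects in the "realistic" case) and 40-byte
	 * vma_lock packs 102 objects into 1 page. After the merge, the
	 * 256-byte vm_area_struct packs 32 objects into 2 pages.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned long vmas = 1000;

		/* pages = ceil(vmas / objs_per_slab) * pages_per_slab */
		unsigned long before    = (vmas + 50) / 51 * 2 + (vmas + 101) / 102;
		unsigned long realistic = (vmas + 45) / 46 * 2 + (vmas + 101) / 102;
		unsigned long after     = (vmas + 31) / 32 * 2;

		/* Prints 50, 54 and 64 pages per 1000 VMAs, matching the
		 * changelog's "from 50 to 64" and "from 54 to 64" figures. */
		printf("before: %lu, realistic: %lu, after: %lu pages\n",
		       before, realistic, after);
		return 0;
	}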
From patchwork Wed Nov 20 00:08:24 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13880640
From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 19 Nov 2024 16:08:24 -0800
In-Reply-To: <20241120000826.335387-1-surenb@google.com>
Message-ID: <20241120000826.335387-4-surenb@google.com>
Subject: [PATCH v4 3/5] mm: mark vma as detached until it's added into vma tree
To: akpm@linux-foundation.org

The current implementation does not set the detached flag when a VMA is
first allocated. This does not represent the real state of the VMA,
which is detached until it is added into the mm's VMA tree. Fix this by
marking new VMAs as detached and resetting the detached flag only after
the VMA is added into a tree.

Introduce vma_mark_attached() to make the API more readable and to
simplify possible future cleanup when vma->vm_mm might be used to
indicate a detached vma, at which point vma_mark_attached() will need
an additional mm parameter.
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Shakeel Butt
---
 include/linux/mm.h               | 27 ++++++++++++++++++++-------
 kernel/fork.c                    |  4 ++++
 mm/memory.c                      |  2 +-
 mm/vma.c                         |  6 +++---
 mm/vma.h                         |  2 ++
 tools/testing/vma/vma_internal.h | 17 ++++++++++++-----
 6 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 737c003b0a1e..dd1b6190df28 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -808,12 +808,21 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 		vma_assert_write_locked(vma);
 }
 
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
 	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
+}
+
+static inline bool is_vma_detached(struct vm_area_struct *vma)
+{
+	return vma->detached;
 }
 
 static inline void release_fault_lock(struct vm_fault *vmf)
@@ -844,8 +853,8 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 		{ mmap_assert_write_locked(vma->vm_mm); }
-static inline void vma_mark_detached(struct vm_area_struct *vma,
-				     bool detached) {}
+static inline void vma_mark_attached(struct vm_area_struct *vma) {}
+static inline void vma_mark_detached(struct vm_area_struct *vma) {}
 
 static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		unsigned long address)
@@ -878,7 +887,10 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
+#endif
 	vma_numab_state_init(vma);
 	vma_lock_init(vma);
 }
@@ -1073,6 +1085,7 @@ static inline int vma_iter_bulk_store(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
 
+	vma_mark_attached(vma);
 	return 0;
 }
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 7823797e31d2..f0cec673583c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -465,6 +465,10 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	data_race(memcpy(new, orig, sizeof(*new)));
 	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
+#endif
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
 
diff --git a/mm/memory.c b/mm/memory.c
index 209885a4134f..d0197a0c0996 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6279,7 +6279,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		goto inval;
 
 	/* Check if the VMA got isolated after we found it */
-	if (vma->detached) {
+	if (is_vma_detached(vma)) {
 		vma_end_read(vma);
 		count_vm_vma_lock_event(VMA_LOCK_MISS);
 		/* The area was replaced with another one */
diff --git a/mm/vma.c b/mm/vma.c
index 8a454a7bbc80..73104d434567 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -295,7 +295,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
 
 	if (vp->remove) {
 again:
-		vma_mark_detached(vp->remove, true);
+		vma_mark_detached(vp->remove);
 		if (vp->file) {
 			uprobe_munmap(vp->remove,
 				      vp->remove->vm_start,
 				      vp->remove->vm_end);
@@ -1220,7 +1220,7 @@ static void reattach_vmas(struct ma_state *mas_detach)
 
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
-		vma_mark_detached(vma, false);
+		vma_mark_attached(vma);
 
 	__mt_destroy(mas_detach->tree);
 }
@@ -1295,7 +1295,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		if (error)
 			goto munmap_gather_failed;
 
-		vma_mark_detached(next, true);
+		vma_mark_detached(next);
 		nrpages = vma_pages(next);
 
 		vms->nr_pages += nrpages;
diff --git a/mm/vma.h b/mm/vma.h
index 388d34748674..2e680f357ace 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -162,6 +162,7 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
 
+	vma_mark_attached(vma);
 	return 0;
 }
 
@@ -385,6 +386,7 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
 
 	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
 	mas_store_prealloc(&vmi->mas, vma);
+	vma_mark_attached(vma);
 }
 
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 11c2c38ca4e8..2fed366d20ef 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -414,13 +414,17 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
 	vma->vm_lock_seq = UINT_MAX;
 }
 
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *);
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
 	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
 }
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
@@ -431,7 +435,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
 	vma_lock_init(vma);
 }
 
@@ -457,6 +462,8 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	memcpy(new, orig, sizeof(*new));
 	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
 
 	return new;
 }
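
[The invariant this patch establishes, that a VMA is born detached and becomes
attached only once it is inserted into the mm's VMA tree while detaching
always happens under the VMA write lock, can be summarized in a sketch. This
is illustrative only; vma_lifecycle_sketch() is a hypothetical name, and the
helpers are the kernel-internal ones from the diffs above, so this is not a
standalone program:]

	/* Sketch of the VMA attach/detach lifecycle after this patch. */
	static void vma_lifecycle_sketch(struct mm_struct *mm,
					 struct vma_iterator *vmi)
	{
		struct vm_area_struct *vma = vm_area_alloc(mm);

		if (!vma)
			return;
		/* Freshly allocated: vma->detached == true, not in the tree. */

		vma_iter_store(vmi, vma);  /* tree insert, then vma_mark_attached() */

		/* Removal: the VMA must be write-locked before it is detached. */
		vma_start_write(vma);      /* excludes lock_vma_under_rcu() readers */
		vma_mark_detached(vma);    /* vma_assert_write_locked() fires if not */
	}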
From patchwork Wed Nov 20 00:08:25 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13880641
From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 19 Nov 2024 16:08:25 -0800
Subject: [PATCH v4 4/5] mm: make vma cache SLAB_TYPESAFE_BY_RCU
To: akpm@linux-foundation.org
Message-ID: <20241120000826.335387-5-surenb@google.com>
In-Reply-To: <20241120000826.335387-1-surenb@google.com>

To enable SLAB_TYPESAFE_BY_RCU for the vma cache we need to ensure that
object reuse before the RCU grace period is over will be detected
inside lock_vma_under_rcu(). lock_vma_under_rcu() enters an RCU read
section, finds the vma at the given address, locks the vma and checks
if it got detached or remapped to cover a different address range.
These last checks are there to ensure that the vma was not modified
after we found it but before locking it. vma reuse introduces several
new possibilities:

1. vma can be reused after it was found but before it is locked;
2. vma can be reused and reinitialized (including changing its vm_mm)
   while being locked in vma_start_read();
3. vma can be reused and reinitialized after it was found but before
   it is locked, then attached at a new address or to a new mm while
   being read-locked.

For case #1 the current checks will help detect cases when:
- vma was reused but not yet added into the tree (detached check)
- vma was reused at a different address range (address check)

We are missing the check for vm_mm to ensure the reused vma was not
attached to a different mm. This patch adds the missing check.

For case #2, we pass mm to vma_start_read() to prevent access to the
unstable vma->vm_mm.

For case #3, we ensure the order in which the vma->detached flag and
vm_start/vm_end/vm_mm are set and checked. vma gets attached after
vm_start/vm_end/vm_mm were set and lock_vma_under_rcu() should check
vma->detached before checking vm_start/vm_end/vm_mm. This is required
because attaching vma happens without vma write-lock, as opposed to vma
detaching, which requires vma write-lock. This patch adds the memory
barriers inside is_vma_detached() and vma_mark_attached() needed to
order reads and writes to vma->detached vs vm_start/vm_end/vm_mm.

After these provisions, SLAB_TYPESAFE_BY_RCU is added to
vm_area_cachep. This will facilitate vm_area_struct reuse and will
minimize the number of call_rcu() calls. Adding a freeptr_t into
vm_area_struct (unioned with vm_start/vm_end) could be used to avoid
bloating the structure, however currently custom free pointers are not
supported in combination with a ctor (see the comment for
kmem_cache_args.freeptr_offset).
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm.h               | 60 +++++++++++++++++++++++++++-----
 include/linux/mm_types.h         | 13 +++----
 kernel/fork.c                    | 53 +++++++++++++++++-----------
 mm/memory.c                      | 15 +++++---
 mm/vma.c                         |  2 +-
 tools/testing/vma/vma_internal.h |  7 ++--
 6 files changed, 103 insertions(+), 47 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dd1b6190df28..2a4794b7a513 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -257,7 +257,7 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *);
 struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
 void vm_area_free(struct vm_area_struct *);
 /* Use only if VMA has no other users */
-void __vm_area_free(struct vm_area_struct *vma);
+void vm_area_free_unreachable(struct vm_area_struct *vma);
 
 #ifndef CONFIG_MMU
 extern struct rb_root nommu_region_tree;
@@ -690,12 +690,32 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
 	vma->vm_lock_seq = UINT_MAX;
 }
 
+#define VMA_BEFORE_LOCK offsetof(struct vm_area_struct, vm_lock)
+#define VMA_LOCK_END(vma) \
+	(((void *)(vma)) + offsetofend(struct vm_area_struct, vm_lock))
+#define VMA_AFTER_LOCK \
+	(sizeof(struct vm_area_struct) - offsetofend(struct vm_area_struct, vm_lock))
+
+static inline void vma_clear(struct vm_area_struct *vma)
+{
+	/* Preserve vma->vm_lock */
+	memset(vma, 0, VMA_BEFORE_LOCK);
+	memset(VMA_LOCK_END(vma), 0, VMA_AFTER_LOCK);
+}
+
+static inline void vma_copy(struct vm_area_struct *new, struct vm_area_struct *orig)
+{
+	/* Preserve vma->vm_lock */
+	data_race(memcpy(new, orig, VMA_BEFORE_LOCK));
+	data_race(memcpy(VMA_LOCK_END(new), VMA_LOCK_END(orig), VMA_AFTER_LOCK));
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
  * using mmap_lock. The function should never yield false unlocked result.
  */
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	/*
 	 * Check before locking. A race might cause false locked result.
@@ -704,7 +724,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * we don't rely on for anything - the mm_lock_seq read against which we
 	 * need ordering is below.
 	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
 		return false;
 
 	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
@@ -721,7 +741,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * after it has been unlocked.
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
-	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
 		up_read(&vma->vm_lock.lock);
 		return false;
 	}
@@ -810,7 +830,15 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 
 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
-	vma->detached = false;
+	/*
+	 * This pairs with smp_rmb() inside is_vma_detached().
+	 * vma is marked attached after all vma modifications are done and it
+	 * got added into the vma tree. All prior vma modifications should be
+	 * made visible before marking the vma attached.
+	 */
+	smp_wmb();
+	/* This pairs with READ_ONCE() in is_vma_detached(). */
+	WRITE_ONCE(vma->detached, false);
 }
 
 static inline void vma_mark_detached(struct vm_area_struct *vma)
@@ -822,7 +850,18 @@ static inline void vma_mark_detached(struct vm_area_struct *vma)
 
 static inline bool is_vma_detached(struct vm_area_struct *vma)
 {
-	return vma->detached;
+	bool detached;
+
+	/* This pairs with WRITE_ONCE() in vma_mark_attached(). */
+	detached = READ_ONCE(vma->detached);
+	/*
+	 * This pairs with smp_wmb() inside vma_mark_attached() to ensure
+	 * vma->detached is read before vma attributes read later inside
+	 * lock_vma_under_rcu().
+	 */
+	smp_rmb();
+
+	return detached;
 }
 
 static inline void release_fault_lock(struct vm_fault *vmf)
@@ -847,7 +886,11 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_lock_init(struct vm_area_struct *vma) {}
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline void vma_clear(struct vm_area_struct *vma)
+	{ memset(vma, 0, sizeof(*vma)); }
+static inline void vma_copy(struct vm_area_struct *new, struct vm_area_struct *orig)
+	{ data_race(memcpy(new, orig, sizeof(*new))); }
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 	{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
@@ -883,7 +926,7 @@ extern const struct vm_operations_struct vma_dummy_vm_ops;
 
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
-	memset(vma, 0, sizeof(*vma));
+	vma_clear(vma);
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
@@ -892,7 +935,6 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->detached = true;
 #endif
 	vma_numab_state_init(vma);
-	vma_lock_init(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5c4bfdcfac72..8f6b0c935c2b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -667,15 +667,10 @@ struct vma_numab_state {
 struct vm_area_struct {
 	/* The first cache line has the info for VMA tree walking. */
 
-	union {
-		struct {
-			/* VMA covers [vm_start; vm_end) addresses within mm */
-			unsigned long vm_start;
-			unsigned long vm_end;
-		};
-#ifdef CONFIG_PER_VMA_LOCK
-		struct rcu_head vm_rcu;	/* Used for deferred freeing. */
-#endif
+	struct {
+		/* VMA covers [vm_start; vm_end) addresses within mm */
+		unsigned long vm_start;
+		unsigned long vm_end;
 	};
 
 	/*
diff --git a/kernel/fork.c b/kernel/fork.c
index f0cec673583c..76c68b041f8a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,6 +436,11 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
+static void vm_area_ctor(void *data)
+{
+	vma_lock_init(data);
+}
+
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
@@ -462,8 +467,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * orig->shared.rb may be modified concurrently, but the clone
 	 * will be reinitialized.
 	 */
-	data_race(memcpy(new, orig, sizeof(*new)));
-	vma_lock_init(new);
+	vma_copy(new, orig);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 #ifdef CONFIG_PER_VMA_LOCK
 	/* vma is not locked, can't use vma_mark_detached() */
@@ -475,32 +479,37 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	return new;
 }
 
-void __vm_area_free(struct vm_area_struct *vma)
+static void __vm_area_free(struct vm_area_struct *vma, bool unreachable)
 {
+#ifdef CONFIG_PER_VMA_LOCK
+	/*
+	 * With SLAB_TYPESAFE_BY_RCU, vma can be reused and we need
+	 * vma->detached to be set before vma is returned into the cache.
+	 * This way reused object won't be used by readers until it's
+	 * initialized and reattached.
+	 * If vma is unreachable, there can be no other users and we
+	 * can set vma->detached directly with no risk of a race.
+	 * If vma is reachable, then it should have been already detached
+	 * under vma write-lock or it was never attached.
+	 */
+	if (unreachable)
+		vma->detached = true;
+	else
+		VM_BUG_ON_VMA(!is_vma_detached(vma), vma);
+#endif
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
-#ifdef CONFIG_PER_VMA_LOCK
-static void vm_area_free_rcu_cb(struct rcu_head *head)
+void vm_area_free(struct vm_area_struct *vma)
 {
-	struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
-						  vm_rcu);
-
-	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
-	__vm_area_free(vma);
+	__vm_area_free(vma, false);
 }
-#endif
 
-void vm_area_free(struct vm_area_struct *vma)
+void vm_area_free_unreachable(struct vm_area_struct *vma)
 {
-#ifdef CONFIG_PER_VMA_LOCK
-	call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb);
-#else
-	__vm_area_free(vma);
-#endif
+	__vm_area_free(vma, true);
 }
 
 static void account_kernel_stack(struct task_struct *tsk, int account)
@@ -3135,9 +3144,11 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-	vm_area_cachep = KMEM_CACHE(vm_area_struct,
-			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
-			SLAB_ACCOUNT);
+	vm_area_cachep = kmem_cache_create("vm_area_struct",
+			sizeof(struct vm_area_struct), 0,
+			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
+			SLAB_ACCOUNT, vm_area_ctor);
+
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/mm/memory.c b/mm/memory.c
index d0197a0c0996..b5fbc71b46bd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6275,10 +6275,16 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma)
 		goto inval;
 
-	if (!vma_start_read(vma))
+	if (!vma_start_read(mm, vma))
 		goto inval;
 
-	/* Check if the VMA got isolated after we found it */
+	/*
+	 * Check if the VMA got isolated after we found it.
+	 * Note: vma we found could have been recycled and is being reattached.
+	 * It's possible to attach a vma while it is read-locked, however a
+	 * read-locked vma can't be detached (detaching requires write-locking).
+	 * Therefore if this check passes, we have an attached and stable vma.
+	 */
 	if (is_vma_detached(vma)) {
 		vma_end_read(vma);
 		count_vm_vma_lock_event(VMA_LOCK_MISS);
@@ -6292,8 +6298,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	 * fields are accessible for RCU readers.
 	 */
 
-	/* Check since vm_start/vm_end might change before we lock the VMA */
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+	/* Check if the vma we locked is the right one. */
+	if (unlikely(vma->vm_mm != mm ||
+		     address < vma->vm_start || address >= vma->vm_end))
 		goto inval_end_read;
 
 	rcu_read_unlock();
diff --git a/mm/vma.c b/mm/vma.c
index 73104d434567..050b83df3df2 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -382,7 +382,7 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
 	if (unreachable)
-		__vm_area_free(vma);
+		vm_area_free_unreachable(vma);
 	else
 		vm_area_free(vma);
 }
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 2fed366d20ef..fd668d6cafc0 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -632,14 +632,15 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void __vm_area_free(struct vm_area_struct *vma)
+static inline void vm_area_free(struct vm_area_struct *vma)
 {
 	free(vma);
 }
 
-static inline void vm_area_free(struct vm_area_struct *vma)
+static inline void vm_area_free_unreachable(struct vm_area_struct *vma)
 {
-	__vm_area_free(vma);
+	vma->detached = true;
+	vm_area_free(vma);
 }
 
 static inline void lru_add_drain(void)

From patchwork Wed Nov 20 00:08:26 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13880642
Date: Tue, 19 Nov 2024 16:08:26 -0800
In-Reply-To: <20241120000826.335387-1-surenb@google.com>
Mime-Version: 1.0
References: <20241120000826.335387-1-surenb@google.com>
X-Mailer: git-send-email 2.47.0.338.g60cca15819-goog
Message-ID: <20241120000826.335387-6-surenb@google.com>
Subject: [PATCH v4 5/5] docs/mm: document latest changes to vm_lock
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
 mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com,
 oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com,
 peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
 brauner@kernel.org, dhowells@redhat.com,
 hdanton@sina.com, hughd@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com

Change the documentation to reflect that vm_lock is now integrated into
the vma. Document the newly introduced vma_start_read_locked{_nested}
functions.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
---
 Documentation/mm/process_addrs.rst | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index 1bf7ad010fc0..a18450b6496d 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -686,7 +686,11 @@ calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
 critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
 before releasing the RCU lock via :c:func:`!rcu_read_unlock`.
 
-VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for
+In cases when the user already holds mmap read lock, :c:func:`!vma_start_read_locked`
+and :c:func:`!vma_start_read_locked_nested` can be used. These functions always
+succeed in acquiring VMA read lock.
+
+VMA read locks hold the read lock on the :c:member:`!vma.vm_lock` semaphore for
 their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it
 via :c:func:`!vma_end_read`.
 
@@ -750,7 +754,7 @@ keep VMAs locked across entirely separate write operations. It also maintains
 correct lock ordering.
 
 Each time a VMA read lock is acquired, we acquire a read lock on the
-:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that
+:c:member:`!vma.vm_lock` read/write semaphore and hold it, while checking that
 the sequence count of the VMA does not match that of the mm.
 
 If it does, the read lock fails. If it does not, we hold the lock, excluding
@@ -760,7 +764,7 @@ Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
 are also RCU safe, so the whole read lock operation is guaranteed to function
 correctly.
 
-On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock`
+On the write side, we acquire a write lock on the :c:member:`!vma.vm_lock`
 read/write semaphore, before setting the VMA's sequence number under this
 lock, also simultaneously holding the mmap write lock.
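A closing usage note rather than part of the patch: under the rules
documented above, a caller that already holds the mmap read lock can take
the VMA read lock unconditionally. A minimal sketch built from the
interfaces named in this series (the helper name here is hypothetical):

static struct vm_area_struct *get_read_locked_vma(struct mm_struct *mm,
						  unsigned long addr)
{
	struct vm_area_struct *vma;

	mmap_read_lock(mm);
	vma = find_vma(mm, addr);
	if (vma && addr >= vma->vm_start) {
		/*
		 * Always succeeds: the mmap read lock excludes writers,
		 * so no sequence-count re-validation is needed.
		 */
		vma_start_read_locked(vma);
	} else {
		vma = NULL;
	}
	mmap_read_unlock(mm);

	/* On success the vma stays read-locked; release with vma_end_read(). */
	return vma;
}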