From patchwork Fri Jan 26 18:26:45 2024
X-Patchwork-Submitter: Lokesh Gidra
X-Patchwork-Id: 13533124
Date: Fri, 26 Jan 2024 10:26:45 -0800
Message-ID: <20240126182647.2748949-1-lokeshgidra@google.com>
Subject: [PATCH 1/3] userfaultfd: move userfaultfd_ctx struct to header file
From: Lokesh Gidra
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
    kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
    david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
    willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
    ngeoffray@google.com, timmurray@google.com, rppt@kernel.org

Moving the struct to userfaultfd_k.h to be accessible from mm/userfaultfd.c.
There are no other changes in the struct.

This is required to prepare for using per-vma locks in userfaultfd
operations.

Signed-off-by: Lokesh Gidra
---
 fs/userfaultfd.c              | 39 -----------------------------------
 include/linux/userfaultfd_k.h | 39 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 959551ff9a95..af5ebaad2f1d 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -50,45 +50,6 @@ static struct ctl_table vm_userfaultfd_table[] = {
 
 static struct kmem_cache *userfaultfd_ctx_cachep __ro_after_init;
 
-/*
- * Start with fault_pending_wqh and fault_wqh so they're more likely
- * to be in the same cacheline.
- *
- * Locking order:
- *	fd_wqh.lock
- *		fault_pending_wqh.lock
- *			fault_wqh.lock
- *				event_wqh.lock
- *
- * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
- * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
- * also taken in IRQ context.
- */
-struct userfaultfd_ctx {
-	/* waitqueue head for the pending (i.e. not read) userfaults */
-	wait_queue_head_t fault_pending_wqh;
-	/* waitqueue head for the userfaults */
-	wait_queue_head_t fault_wqh;
-	/* waitqueue head for the pseudo fd to wakeup poll/read */
-	wait_queue_head_t fd_wqh;
-	/* waitqueue head for events */
-	wait_queue_head_t event_wqh;
-	/* a refile sequence protected by fault_pending_wqh lock */
-	seqcount_spinlock_t refile_seq;
-	/* pseudo fd refcounting */
-	refcount_t refcount;
-	/* userfaultfd syscall flags */
-	unsigned int flags;
-	/* features requested from the userspace */
-	unsigned int features;
-	/* released */
-	bool released;
-	/* memory mappings are changing because of non-cooperative event */
-	atomic_t mmap_changing;
-	/* mm with one ore more vmas attached to this userfaultfd_ctx */
-	struct mm_struct *mm;
-};
-
 struct userfaultfd_fork_ctx {
 	struct userfaultfd_ctx *orig;
 	struct userfaultfd_ctx *new;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index e4056547fbe6..691d928ee864 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -36,6 +36,45 @@
 #define UFFD_SHARED_FCNTL_FLAGS (O_CLOEXEC | O_NONBLOCK)
 #define UFFD_FLAGS_SET (EFD_SHARED_FCNTL_FLAGS)
 
+/*
+ * Start with fault_pending_wqh and fault_wqh so they're more likely
+ * to be in the same cacheline.
+ *
+ * Locking order:
+ *	fd_wqh.lock
+ *		fault_pending_wqh.lock
+ *			fault_wqh.lock
+ *				event_wqh.lock
+ *
+ * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
+ * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
+ * also taken in IRQ context.
+ */
+struct userfaultfd_ctx {
+	/* waitqueue head for the pending (i.e. not read) userfaults */
+	wait_queue_head_t fault_pending_wqh;
+	/* waitqueue head for the userfaults */
+	wait_queue_head_t fault_wqh;
+	/* waitqueue head for the pseudo fd to wakeup poll/read */
+	wait_queue_head_t fd_wqh;
+	/* waitqueue head for events */
+	wait_queue_head_t event_wqh;
+	/* a refile sequence protected by fault_pending_wqh lock */
+	seqcount_spinlock_t refile_seq;
+	/* pseudo fd refcounting */
+	refcount_t refcount;
+	/* userfaultfd syscall flags */
+	unsigned int flags;
+	/* features requested from the userspace */
+	unsigned int features;
+	/* released */
+	bool released;
+	/* memory mappings are changing because of non-cooperative event */
+	atomic_t mmap_changing;
+	/* mm with one ore more vmas attached to this userfaultfd_ctx */
+	struct mm_struct *mm;
+};
+
 extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason);
 
 /* A combined operation mode + behavior flags. */

From patchwork Fri Jan 26 18:26:46 2024
X-Patchwork-Submitter: Lokesh Gidra
X-Patchwork-Id: 13533125
Date: Fri, 26 Jan 2024 10:26:46 -0800
In-Reply-To: <20240126182647.2748949-1-lokeshgidra@google.com>
References: <20240126182647.2748949-1-lokeshgidra@google.com>
Message-ID: <20240126182647.2748949-2-lokeshgidra@google.com>
Subject: [PATCH 2/3] userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx
From: Lokesh Gidra
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
    kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
    david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
    willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
    ngeoffray@google.com, timmurray@google.com, rppt@kernel.org

Increments and loads to mmap_changing are always in mmap_lock critical section.
This ensures that if userspace requests event notification for
non-cooperative operations (e.g. mremap), userfaultfd operations don't
occur concurrently.

This can be achieved by using a separate read-write semaphore in
userfaultfd_ctx such that increments are done in write-mode and loads in
read-mode, thereby eliminating the dependency on mmap_lock for this
purpose.

This is a preparatory step before we replace mmap_lock usage with
per-vma locks in fill/move ioctls.

Signed-off-by: Lokesh Gidra
---
 fs/userfaultfd.c              | 39 ++++++++++++++----------
 include/linux/userfaultfd_k.h | 31 ++++++++++---------
 mm/userfaultfd.c              | 56 +++++++++++++++++++++--------------
 3 files changed, 73 insertions(+), 53 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index af5ebaad2f1d..5aaf248d3107 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 	ctx->flags = octx->flags;
 	ctx->features = octx->features;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = vma->vm_mm;
 	mmgrab(ctx->mm);
 
 	userfaultfd_ctx_get(octx);
+	down_write(&octx->map_changing_lock);
 	atomic_inc(&octx->mmap_changing);
+	up_write(&octx->map_changing_lock);
 	fctx->orig = octx;
 	fctx->new = ctx;
 	list_add_tail(&fctx->list, fcs);
@@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 	if (ctx->features & UFFD_FEATURE_EVENT_REMAP) {
 		vm_ctx->ctx = ctx;
 		userfaultfd_ctx_get(ctx);
+		down_write(&ctx->map_changing_lock);
 		atomic_inc(&ctx->mmap_changing);
+		up_write(&ctx->map_changing_lock);
 	} else {
 		/* Drop uffd context if remap feature not enabled */
 		vma_start_write(vma);
@@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma,
 		return true;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	mmap_read_unlock(mm);
 
 	msg_init(&ewq.msg);
@@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start,
 		return -ENOMEM;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	unmap_ctx->ctx = ctx;
 	unmap_ctx->start = start;
 	unmap_ctx->end = end;
@@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 	if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP)
 		flags |= MFILL_ATOMIC_WP;
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
-					uffdio_copy.len, &ctx->mmap_changing,
-					flags);
+		ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src,
+					uffdio_copy.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start,
-					    uffdio_zeropage.range.len,
-					    &ctx->mmap_changing);
+		ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start,
+					    uffdio_zeropage.range.len);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
-					  uffdio_wp.range.len, mode_wp,
-					  &ctx->mmap_changing);
+		ret = mwriteprotect_range(ctx, uffdio_wp.range.start,
+					  uffdio_wp.range.len, mode_wp);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 		flags |= MFILL_ATOMIC_WP;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start,
-					    uffdio_continue.range.len,
-					    &ctx->mmap_changing, flags);
+		ret = mfill_atomic_continue(ctx, uffdio_continue.range.start,
+					    uffdio_continue.range.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start,
-					  uffdio_poison.range.len,
-					  &ctx->mmap_changing, 0);
+		ret = mfill_atomic_poison(ctx, uffdio_poison.range.start,
+					  uffdio_poison.range.len, 0);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -2003,12 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 	if (mmget_not_zero(mm)) {
 		mmap_read_lock(mm);
 
-		/* Re-check after taking mmap_lock */
+		/* Re-check after taking map_changing_lock */
+		down_read(&ctx->map_changing_lock);
 		if (likely(!atomic_read(&ctx->mmap_changing)))
 			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
 					 uffdio_move.len, uffdio_move.mode);
 		else
 			ret = -EINVAL;
+		up_read(&ctx->map_changing_lock);
 
 		mmap_read_unlock(mm);
 		mmput(mm);
@@ -2216,6 +2222,7 @@ static int new_userfaultfd(int flags)
 	ctx->flags = flags;
 	ctx->features = 0;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = current->mm;
 	/* prevent the mm struct to be freed */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 691d928ee864..3210c3552976 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -69,6 +69,13 @@ struct userfaultfd_ctx {
 	unsigned int features;
 	/* released */
 	bool released;
+	/*
+	 * Prevents userfaultfd operations (fill/move/wp) from happening while
+	 * some non-cooperative event(s) is taking place. Increments are done
+	 * in write-mode. Whereas, userfaultfd operations, which includes
+	 * reading mmap_changing, is done under read-mode.
+	 */
+	struct rw_semaphore map_changing_lock;
 	/* memory mappings are changing because of non-cooperative event */
 	atomic_t mmap_changing;
 	/* mm with one ore more vmas attached to this userfaultfd_ctx */
@@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
 				    unsigned long dst_addr, struct page *page,
 				    bool newly_allocated, uffd_flags_t flags);
 
-extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 				 unsigned long src_start, unsigned long len,
-				 atomic_t *mmap_changing, uffd_flags_t flags);
-extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm,
+				 uffd_flags_t flags);
+extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
 				     unsigned long dst_start,
-				     unsigned long len,
-				     atomic_t *mmap_changing);
-extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start,
-				     unsigned long len, atomic_t *mmap_changing,
-				     uffd_flags_t flags);
-extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-				   unsigned long len, atomic_t *mmap_changing,
-				   uffd_flags_t flags);
-extern int mwriteprotect_range(struct mm_struct *dst_mm,
-			       unsigned long start, unsigned long len,
-			       bool enable_wp, atomic_t *mmap_changing);
+				     unsigned long len);
+extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+				     unsigned long len, uffd_flags_t flags);
+extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+				   unsigned long len, uffd_flags_t flags);
+extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			       unsigned long len, bool enable_wp);
 extern long uffd_wp_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len, bool enable_wp);
 
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 20e3b0d9cf7e..a66b4d62a361 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -353,6 +353,7 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
  * called with mmap_lock held, it will release mmap_lock before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
+					      struct userfaultfd_ctx *ctx,
 					      struct vm_area_struct *dst_vma,
 					      unsigned long dst_start,
 					      unsigned long src_start,
@@ -378,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * feature is not supported.
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(dst_mm);
 		return -EINVAL;
 	}
@@ -462,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -472,6 +475,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				goto out;
 			}
 			mmap_read_lock(dst_mm);
+			down_read(&ctx->map_changing_lock);
 
 			dst_vma = NULL;
 			goto retry;
@@ -491,6 +495,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -502,7 +507,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 }
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
-extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
+				    struct vm_area_struct *dst_vma,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
@@ -553,13 +559,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 	return err;
 }
 
-static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
+static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 					    unsigned long dst_start,
 					    unsigned long src_start,
 					    unsigned long len,
-					    atomic_t *mmap_changing,
 					    uffd_flags_t flags)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	struct vm_area_struct *dst_vma;
 	ssize_t err;
 	pmd_t *dst_pmd;
@@ -589,8 +595,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	/*
@@ -622,7 +629,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return mfill_atomic_hugetlb(dst_vma, dst_start,
+		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
 					    src_start, len, flags);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
@@ -682,6 +689,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		if (unlikely(err == -ENOENT)) {
 			void *kaddr;
 
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -712,6 +720,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -722,34 +731,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	return copied ? copied : err;
 }
 
-ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 			  unsigned long src_start, unsigned long len,
-			  atomic_t *mmap_changing, uffd_flags_t flags)
+			  uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing,
+	return mfill_atomic(ctx, dst_start, src_start, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY));
 }
 
-ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing)
+ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
+			      unsigned long start,
+			      unsigned long len)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE));
 }
 
-ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing,
-			      uffd_flags_t flags)
+ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start,
+			      unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE));
 }
 
-ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-			    unsigned long len, atomic_t *mmap_changing,
-			    uffd_flags_t flags)
+ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+			    unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON));
 }
 
@@ -782,10 +790,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma,
 	return ret;
 }
 
-int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
-			unsigned long len, bool enable_wp,
-			atomic_t *mmap_changing)
+int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			unsigned long len, bool enable_wp)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	unsigned long end = start + len;
 	unsigned long _start, _end;
 	struct vm_area_struct *dst_vma;
@@ -809,8 +817,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	err = -ENOENT;
@@ -839,6 +848,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		err = 0;
 	}
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 	return err;
 }

From patchwork Fri Jan 26 18:26:47 2024
X-Patchwork-Submitter: Lokesh Gidra
X-Patchwork-Id: 13533126
Date: Fri, 26 Jan 2024 10:26:47 -0800
In-Reply-To: <20240126182647.2748949-1-lokeshgidra@google.com>
References: <20240126182647.2748949-1-lokeshgidra@google.com>
Message-ID: <20240126182647.2748949-3-lokeshgidra@google.com>
Subject: [PATCH 3/3] userfaultfd: use per-vma locks in userfaultfd operations
From: Lokesh Gidra
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
 kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
 david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
 willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, timmurray@google.com, rppt@kernel.org

Performing userfaultfd operations (like copy/move etc.) in the critical
section of mmap_lock (read-mode) has shown significant contention on the
lock when operations requiring the lock in write-mode are taking place
concurrently. We can use per-vma locks instead to significantly reduce
the contention.

All userfaultfd operations, except write-protect, opportunistically use
per-vma locks to lock vmas. The write-protect operation still requires
mmap_lock as it iterates over multiple vmas.
Signed-off-by: Lokesh Gidra
---
 fs/userfaultfd.c |  14 +----
 mm/userfaultfd.c | 160 ++++++++++++++++++++++++++++++++++-------------
 2 files changed, 117 insertions(+), 57 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 5aaf248d3107..faa10ed3788f 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -2005,18 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(mm)) {
-		mmap_read_lock(mm);
-
-		/* Re-check after taking map_changing_lock */
-		down_read(&ctx->map_changing_lock);
-		if (likely(!atomic_read(&ctx->mmap_changing)))
-			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
-					 uffdio_move.len, uffdio_move.mode);
-		else
-			ret = -EINVAL;
-		up_read(&ctx->map_changing_lock);
-
-		mmap_read_unlock(mm);
+		ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
+				 uffdio_move.len, uffdio_move.mode);
 		mmput(mm);
 	} else {
 		return -ESRCH;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index a66b4d62a361..9be643308f05 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -19,20 +19,39 @@
 #include <asm/tlbflush.h>
 #include "internal.h"
 
-static __always_inline
-struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
-				    unsigned long dst_start,
-				    unsigned long len)
+void unpin_vma(struct mm_struct *mm, struct vm_area_struct *vma, bool *mmap_locked)
+{
+	BUG_ON(!vma && !*mmap_locked);
+
+	if (*mmap_locked) {
+		mmap_read_unlock(mm);
+		*mmap_locked = false;
+	} else
+		vma_end_read(vma);
+}
+
+/*
+ * Search for VMA and make sure it is stable either by locking it or taking
+ * mmap_lock.
+ */
+struct vm_area_struct *find_and_pin_dst_vma(struct mm_struct *dst_mm,
+					    unsigned long dst_start,
+					    unsigned long len,
+					    bool *mmap_locked)
 {
+	struct vm_area_struct *dst_vma = lock_vma_under_rcu(dst_mm, dst_start);
+	if (!dst_vma) {
+		mmap_read_lock(dst_mm);
+		*mmap_locked = true;
+		dst_vma = find_vma(dst_mm, dst_start);
+	}
+
 	/*
 	 * Make sure that the dst range is both valid and fully within a
 	 * single existing vma.
 	 */
-	struct vm_area_struct *dst_vma;
-
-	dst_vma = find_vma(dst_mm, dst_start);
 	if (!range_in_vma(dst_vma, dst_start, dst_start + len))
-		return NULL;
+		goto unpin;
 
 	/*
 	 * Check the vma is registered in uffd, this is required to
@@ -40,9 +59,13 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 	 * time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
-		return NULL;
+		goto unpin;
 
 	return dst_vma;
+
+unpin:
+	unpin_vma(dst_mm, dst_vma, mmap_locked);
+	return NULL;
 }
 
 /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
@@ -350,7 +373,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 #ifdef CONFIG_HUGETLB_PAGE
 /*
  * mfill_atomic processing for HUGETLB vmas.  Note that this routine is
- * called with mmap_lock held, it will release mmap_lock before returning.
+ * called with either vma-lock or mmap_lock held, it will release the lock
+ * before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
 					      struct userfaultfd_ctx *ctx,
@@ -358,7 +382,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
-					      uffd_flags_t flags)
+					      uffd_flags_t flags,
+					      bool *mmap_locked)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
 	int vm_shared = dst_vma->vm_flags & VM_SHARED;
@@ -380,7 +405,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
 		up_read(&ctx->map_changing_lock);
-		mmap_read_unlock(dst_mm);
+		unpin_vma(dst_mm, dst_vma, mmap_locked);
 		return -EINVAL;
 	}
 
@@ -404,12 +429,25 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 */
 	if (!dst_vma) {
 		err = -ENOENT;
-		dst_vma = find_dst_vma(dst_mm, dst_start, len);
-		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
-			goto out_unlock;
+		dst_vma = find_and_pin_dst_vma(dst_mm, dst_start,
+					       len, mmap_locked);
+		if (!dst_vma)
+			goto out;
+		if (!is_vm_hugetlb_page(dst_vma))
+			goto out_unlock_vma;
 
 		err = -EINVAL;
 		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
+			goto out_unlock_vma;
+
+		/*
+		 * If memory mappings are changing because of non-cooperative
+		 * operation (e.g. mremap) running in parallel, bail out and
+		 * request the user to retry later
+		 */
+		down_read(&ctx->map_changing_lock);
+		err = -EAGAIN;
+		if (atomic_read(&ctx->mmap_changing))
 			goto out_unlock;
 
 		vm_shared = dst_vma->vm_flags & VM_SHARED;
@@ -465,7 +503,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 		if (unlikely(err == -ENOENT)) {
 			up_read(&ctx->map_changing_lock);
-			mmap_read_unlock(dst_mm);
+			unpin_vma(dst_mm, dst_vma, mmap_locked);
 			BUG_ON(!folio);
 
 			err = copy_folio_from_user(folio,
@@ -474,8 +512,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				err = -EFAULT;
 				goto out;
 			}
-			mmap_read_lock(dst_mm);
-			down_read(&ctx->map_changing_lock);
 
 			dst_vma = NULL;
 			goto retry;
@@ -496,7 +532,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
-	mmap_read_unlock(dst_mm);
+out_unlock_vma:
+	unpin_vma(dst_mm, dst_vma, mmap_locked);
 out:
 	if (folio)
 		folio_put(folio);
@@ -512,7 +549,8 @@ extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
-				    uffd_flags_t flags);
+				    uffd_flags_t flags,
+				    bool *mmap_locked);
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
@@ -572,6 +610,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	unsigned long src_addr, dst_addr;
 	long copied;
 	struct folio *folio;
+	bool mmap_locked = false;
 
 	/*
 	 * Sanitize the command parameters:
@@ -588,7 +627,14 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	copied = 0;
 	folio = NULL;
 retry:
-	mmap_read_lock(dst_mm);
+	/*
+	 * Make sure the vma is not shared, that the dst range is
+	 * both valid and fully within a single existing vma.
+	 */
+	err = -ENOENT;
+	dst_vma = find_and_pin_dst_vma(dst_mm, dst_start, len, &mmap_locked);
+	if (!dst_vma)
+		goto out;
 
 	/*
 	 * If memory mappings are changing because of non-cooperative
@@ -600,15 +646,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
-	/*
-	 * Make sure the vma is not shared, that the dst range is
-	 * both valid and fully within a single existing vma.
-	 */
-	err = -ENOENT;
-	dst_vma = find_dst_vma(dst_mm, dst_start, len);
-	if (!dst_vma)
-		goto out_unlock;
-
 	err = -EINVAL;
 	/*
 	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
@@ -629,8 +666,8 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return  mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
-					     src_start, len, flags);
+		return  mfill_atomic_hugetlb(ctx, dst_vma, dst_start, src_start,
+					     len, flags, &mmap_locked);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
@@ -690,7 +727,8 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 		void *kaddr;
 
 		up_read(&ctx->map_changing_lock);
-		mmap_read_unlock(dst_mm);
+		unpin_vma(dst_mm, dst_vma, &mmap_locked);
+
 		BUG_ON(!folio);
 
 		kaddr = kmap_local_folio(folio, 0);
@@ -721,7 +759,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
-	mmap_read_unlock(dst_mm);
+	unpin_vma(dst_mm, dst_vma, &mmap_locked);
 out:
 	if (folio)
 		folio_put(folio);
@@ -1243,8 +1281,6 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
  * @len: length of the virtual memory range
  * @mode: flags from uffdio_move.mode
  *
- * Must be called with mmap_lock held for read.
- *
  * move_pages() remaps arbitrary anonymous pages atomically in zero
  * copy. It only works on non shared anonymous pages because those can
  * be relocated without generating non linear anon_vmas in the rmap
@@ -1320,6 +1356,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 	pmd_t *src_pmd, *dst_pmd;
 	long err = -EINVAL;
 	ssize_t moved = 0;
+	bool mmap_locked = false;
 
 	/* Sanitize the command parameters. */
 	if (WARN_ON_ONCE(src_start & ~PAGE_MASK) ||
@@ -1332,28 +1369,52 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 	    WARN_ON_ONCE(dst_start + len <= dst_start))
 		goto out;
 
+	dst_vma = NULL;
+	src_vma = lock_vma_under_rcu(mm, src_start);
+	if (src_vma) {
+		dst_vma = lock_vma_under_rcu(mm, dst_start);
+		if (!dst_vma)
+			vma_end_read(src_vma);
+	}
+
+	/* If we failed to lock both VMAs, fall back to mmap_lock */
+	if (!dst_vma) {
+		mmap_read_lock(mm);
+		mmap_locked = true;
+		src_vma = find_vma(mm, src_start);
+		if (!src_vma)
+			goto out_unlock_mmap;
+		dst_vma = find_vma(mm, dst_start);
+		if (!dst_vma)
+			goto out_unlock_mmap;
+	}
+
+	/* Re-check after taking map_changing_lock */
+	down_read(&ctx->map_changing_lock);
+	if (unlikely(atomic_read(&ctx->mmap_changing))) {
+		err = -EAGAIN;
+		goto out_unlock;
+	}
+
 	/*
 	 * Make sure the vma is not shared, that the src and dst remap
 	 * ranges are both valid and fully within a single existing
 	 * vma.
 	 */
-	src_vma = find_vma(mm, src_start);
-	if (!src_vma || (src_vma->vm_flags & VM_SHARED))
-		goto out;
+	if (src_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
 	if (src_start < src_vma->vm_start ||
 	    src_start + len > src_vma->vm_end)
-		goto out;
+		goto out_unlock;
 
-	dst_vma = find_vma(mm, dst_start);
-	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
-		goto out;
+	if (dst_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
 	if (dst_start < dst_vma->vm_start ||
 	    dst_start + len > dst_vma->vm_end)
-		goto out;
+		goto out_unlock;
 
 	err = validate_move_areas(ctx, src_vma, dst_vma);
 	if (err)
-		goto out;
+		goto out_unlock;
 
 	for (src_addr = src_start, dst_addr = dst_start;
 	     src_addr < src_start + len;) {
@@ -1475,6 +1536,15 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 		moved += step_size;
 	}
 
+out_unlock:
+	up_read(&ctx->map_changing_lock);
+out_unlock_mmap:
+	if (mmap_locked)
+		mmap_read_unlock(mm);
+	else {
+		vma_end_read(dst_vma);
+		vma_end_read(src_vma);
+	}
 out:
 	VM_WARN_ON(moved < 0);
 	VM_WARN_ON(err > 0);