From patchwork Sat Sep 23 01:31:44 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13396495
From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 22 Sep 2023 18:31:44 -0700
Subject: [PATCH v2 1/3] userfaultfd: UFFDIO_REMAP: rmap preparation
Message-ID: <20230923013148.1390521-2-surenb@google.com>
In-Reply-To: <20230923013148.1390521-1-surenb@google.com>
References: <20230923013148.1390521-1-surenb@google.com>
To: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, shuah@kernel.org,
 aarcange@redhat.com, lokeshgidra@google.com, peterx@redhat.com,
 david@redhat.com, hughd@google.com, mhocko@suse.com,
 axelrasmussen@google.com, rppt@kernel.org, willy@infradead.org,
 Liam.Howlett@oracle.com, jannh@google.com, zhangpeng362@huawei.com,
 bgeffon@google.com, kaleshsingh@google.com, ngeoffray@google.com,
 jdduke@google.com, surenb@google.com, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, kernel-team@android.com
From: Andrea Arcangeli

As far as the rmap code is concerned, UFFDIO_REMAP only alters the
page->mapping and page->index, and it does so while holding the page
lock. However, folio_referenced() performs rmap walks without taking
the folio lock first, so folio_lock_anon_vma_read() must be updated to
re-check that the folio->mapping didn't change after we obtained the
anon_vma read lock.

UFFDIO_REMAP takes the anon_vma lock for writing before altering the
folio->mapping, so if the folio->mapping is still the same after
obtaining the anon_vma read lock (without the folio lock), the rmap
walks can go ahead safely (and UFFDIO_REMAP will wait for the rmap
walk to complete before proceeding). UFFDIO_REMAP serializes against
itself with the folio lock.

All other places that take the anon_vma lock while holding the
mmap_lock for writing don't need to check whether the folio->mapping
has changed after taking the anon_vma lock, regardless of the folio
lock, because UFFDIO_REMAP holds the mmap_lock for reading.

There's one constraint enforced to allow this simplification: the
source pages passed to UFFDIO_REMAP must be mapped in only one vma,
but this is an acceptable tradeoff for UFFDIO_REMAP users.

The source addresses passed to UFFDIO_REMAP can be set as VM_DONTCOPY
with MADV_DONTFORK to avoid any risk of the mapcount of the pages
increasing if some thread of the process forks before UFFDIO_REMAP
runs.
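To make that last point concrete, here is a minimal userspace sketch of
the MADV_DONTFORK preparation described above (illustrative only, not
part of this patch; src_area and src_len are placeholder names):

	#include <sys/mman.h>

	/*
	 * Mark the future UFFDIO_REMAP source range VM_DONTCOPY so a
	 * fork() in another thread cannot raise the mapcount of its
	 * pages behind our back.
	 */
	if (madvise(src_area, src_len, MADV_DONTFORK))
		perror("madvise(MADV_DONTFORK)");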
Signed-off-by: Andrea Arcangeli
Signed-off-by: Suren Baghdasaryan
---
 mm/rmap.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..c1ebbd23fa61 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -542,6 +542,7 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
+repeat:
 	rcu_read_lock();
 	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
@@ -586,6 +587,18 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	rcu_read_unlock();
 	anon_vma_lock_read(anon_vma);
 
+	/*
+	 * Check if UFFDIO_REMAP changed the anon_vma. This is needed
+	 * because we don't assume the folio was locked.
+	 */
+	if (unlikely((unsigned long) READ_ONCE(folio->mapping) !=
+		     anon_mapping)) {
+		anon_vma_unlock_read(anon_vma);
+		put_anon_vma(anon_vma);
+		anon_vma = NULL;
+		goto repeat;
+	}
+
 	if (atomic_dec_and_test(&anon_vma->refcount)) {
 		/*
 		 * Oops, we held the last refcount, release the lock
From patchwork Sat Sep 23 01:31:45 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13396497
From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 22 Sep 2023 18:31:45 -0700
Subject: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI
Message-ID: <20230923013148.1390521-3-surenb@google.com>
In-Reply-To: <20230923013148.1390521-1-surenb@google.com>
References: <20230923013148.1390521-1-surenb@google.com>
To: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, shuah@kernel.org,
 aarcange@redhat.com, lokeshgidra@google.com, peterx@redhat.com,
 david@redhat.com, hughd@google.com, mhocko@suse.com,
 axelrasmussen@google.com, rppt@kernel.org, willy@infradead.org,
 Liam.Howlett@oracle.com, jannh@google.com, zhangpeng362@huawei.com,
 bgeffon@google.com, kaleshsingh@google.com, ngeoffray@google.com,
 jdduke@google.com, surenb@google.com, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, kernel-team@android.com
From: Andrea Arcangeli

This implements the uABI of UFFDIO_REMAP.

Notably one mode bitflag is also forwarded (and in turn known) by the
low-level remap_pages method.
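For illustration, a minimal sketch of how userspace could drive the new
ioctl (not part of this patch; it assumes a uffd created with
userfaultfd(2) and registered over the destination range, and uses only
the fields defined by this uABI):

	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>

	/*
	 * Move len bytes of non-shared anonymous memory from src to
	 * dst. Returns the number of bytes remapped, or the negative
	 * error the kernel reported back in the "remap" field.
	 */
	static long uffd_remap(int uffd, unsigned long dst,
			       unsigned long src, unsigned long len)
	{
		struct uffdio_remap remap = {
			.dst = dst,
			.src = src,
			.len = len,
			.mode = 0,	/* 0 == wake waiters on dst */
		};

		ioctl(uffd, UFFDIO_REMAP, &remap);
		return remap.remap;
	}

As with UFFDIO_COPY's "copy" field, the "remap" field carries the real
retval even when the ioctl itself returns -1.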
Signed-off-by: Andrea Arcangeli
Signed-off-by: Suren Baghdasaryan
---
Changes since v1:
- add mmget_not_zero in userfaultfd_remap, per Jann Horn
- removed extern from function definitions, per Matthew Wilcox
- converted to folios in remap_pages_huge_pmd, per Matthew Wilcox
- use PageAnonExclusive in remap_pages_huge_pmd, per David Hildenbrand
- handle pgtable transfers between MMs, per Jann Horn
- ignore concurrent A/D pte bit changes, per Jann Horn
- split functions into smaller units, per David Hildenbrand
- test for folio_test_large in remap_anon_pte, per Matthew Wilcox
- use pte_swp_exclusive for swapcount check, per David Hildenbrand
- eliminated use of mmu_notifier_invalidate_range_start_nonblock,
  per Jann Horn
- simplified THP alignment checks, per Jann Horn
- refactored the loop inside remap_pages, per Jann Horn
- additional clarifying comments, per Jann Horn

 fs/userfaultfd.c                 |  63 ++++
 include/linux/rmap.h             |   5 +
 include/linux/userfaultfd_k.h    |  12 +
 include/uapi/linux/userfaultfd.h |  22 ++
 mm/huge_memory.c                 | 130 +++++++
 mm/khugepaged.c                  |   3 +
 mm/userfaultfd.c                 | 590 +++++++++++++++++++++++++++++++
 7 files changed, 825 insertions(+)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 56eaae9dac1a..5b6bb20f4518 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -2027,6 +2027,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features)
 	return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
 }
 
+static int userfaultfd_remap(struct userfaultfd_ctx *ctx,
+			     unsigned long arg)
+{
+	__s64 ret;
+	struct uffdio_remap uffdio_remap;
+	struct uffdio_remap __user *user_uffdio_remap;
+	struct userfaultfd_wake_range range;
+
+	user_uffdio_remap = (struct uffdio_remap __user *) arg;
+
+	ret = -EAGAIN;
+	if (atomic_read(&ctx->mmap_changing))
+		goto out;
+
+	ret = -EFAULT;
+	if (copy_from_user(&uffdio_remap, user_uffdio_remap,
+			   /* don't copy "remap" last field */
+			   sizeof(uffdio_remap)-sizeof(__s64)))
+		goto out;
+
+	ret = validate_range(ctx->mm, uffdio_remap.dst, uffdio_remap.len);
+	if (ret)
+		goto out;
+
+	ret = validate_range(current->mm, uffdio_remap.src, uffdio_remap.len);
+	if (ret)
+		goto out;
+
+	ret = -EINVAL;
+	if (uffdio_remap.mode & ~(UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES|
+				  UFFDIO_REMAP_MODE_DONTWAKE))
+		goto out;
+
+	if (mmget_not_zero(ctx->mm)) {
+		ret = remap_pages(ctx->mm, current->mm,
+				  uffdio_remap.dst, uffdio_remap.src,
+				  uffdio_remap.len, uffdio_remap.mode);
+		mmput(ctx->mm);
+	} else {
+		return -ESRCH;
+	}
+
+	if (unlikely(put_user(ret, &user_uffdio_remap->remap)))
+		return -EFAULT;
+	if (ret < 0)
+		goto out;
+
+	/* len == 0 would wake all */
+	BUG_ON(!ret);
+	range.len = ret;
+	if (!(uffdio_remap.mode & UFFDIO_REMAP_MODE_DONTWAKE)) {
+		range.start = uffdio_remap.dst;
+		wake_userfault(ctx, &range);
+	}
+	ret = range.len == uffdio_remap.len ? 0 : -EAGAIN;
+
+out:
+	return ret;
+}
+
 /*
  * userland asks for a certain API version and we return which bits
  * and ioctl commands are implemented in this kernel for such API
@@ -2113,6 +2173,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
 	case UFFDIO_ZEROPAGE:
 		ret = userfaultfd_zeropage(ctx, arg);
 		break;
+	case UFFDIO_REMAP:
+		ret = userfaultfd_remap(ctx, arg);
+		break;
 	case UFFDIO_WRITEPROTECT:
 		ret = userfaultfd_writeprotect(ctx, arg);
 		break;
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 51cc21ebb568..614c4b439907 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
 	down_write(&anon_vma->root->rwsem);
 }
 
+static inline int anon_vma_trylock_write(struct anon_vma *anon_vma)
+{
+	return down_write_trylock(&anon_vma->root->rwsem);
+}
+
 static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
 {
 	up_write(&anon_vma->root->rwsem);
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index ac8c6854097c..9ea2c43ad4b7 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
 extern long uffd_wp_range(struct vm_area_struct *vma, unsigned long start,
 			  unsigned long len, bool enable_wp);
 
+/* remap_pages */
+void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
+void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+		    unsigned long dst_start, unsigned long src_start,
+		    unsigned long len, __u64 flags);
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+			 pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
+			 struct vm_area_struct *dst_vma,
+			 struct vm_area_struct *src_vma,
+			 unsigned long dst_addr, unsigned long src_addr);
+
 /* mm helpers */
 static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
 						   struct vm_userfaultfd_ctx vm_ctx)
diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
index 62151706c5a3..22d1c43e39f9 100644
--- a/include/uapi/linux/userfaultfd.h
+++ b/include/uapi/linux/userfaultfd.h
@@ -49,6 +49,7 @@
 	((__u64)1 << _UFFDIO_WAKE |		\
 	 (__u64)1 << _UFFDIO_COPY |		\
 	 (__u64)1 << _UFFDIO_ZEROPAGE |		\
+	 (__u64)1 << _UFFDIO_REMAP |		\
 	 (__u64)1 << _UFFDIO_WRITEPROTECT |	\
 	 (__u64)1 << _UFFDIO_CONTINUE |		\
 	 (__u64)1 << _UFFDIO_POISON)
@@ -72,6 +73,7 @@
 #define _UFFDIO_WAKE			(0x02)
 #define _UFFDIO_COPY			(0x03)
 #define _UFFDIO_ZEROPAGE		(0x04)
+#define _UFFDIO_REMAP			(0x05)
 #define _UFFDIO_WRITEPROTECT		(0x06)
 #define _UFFDIO_CONTINUE		(0x07)
 #define _UFFDIO_POISON			(0x08)
@@ -91,6 +93,8 @@
 				      struct uffdio_copy)
 #define UFFDIO_ZEROPAGE		_IOWR(UFFDIO, _UFFDIO_ZEROPAGE,	\
 				      struct uffdio_zeropage)
+#define UFFDIO_REMAP		_IOWR(UFFDIO, _UFFDIO_REMAP,	\
+				      struct uffdio_remap)
#define UFFDIO_WRITEPROTECT	_IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
 				      struct uffdio_writeprotect)
 #define UFFDIO_CONTINUE		_IOWR(UFFDIO, _UFFDIO_CONTINUE,	\
@@ -340,6 +344,24 @@ struct uffdio_poison {
 	__s64 updated;
 };
 
+struct uffdio_remap {
+	__u64 dst;
+	__u64 src;
+	__u64 len;
+	/*
+	 * Especially if used to atomically remove memory from the
+	 * address space, the wake on the dst range is not needed.
+	 */
+#define UFFDIO_REMAP_MODE_DONTWAKE		((__u64)1<<0)
+#define UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES	((__u64)1<<1)
+	__u64 mode;
+	/*
+	 * "remap" is written by the ioctl and must be at the end: the
+	 * copy_from_user will not read the last 8 bytes.
+	 */
+	__s64 remap;
+};
+
 /*
  * Flags for the userfaultfd(2) system call itself.
  */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 064fbd90822b..a8c898df36db 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1932,6 +1932,136 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	return ret;
 }
 
+#ifdef CONFIG_USERFAULTFD
+/*
+ * The caller holds the PT lock for src_pmd and the mmap_lock for
+ * reading; this function returns after releasing that PT lock. We're
+ * guaranteed that src_pmd is a transparent huge pmd until its PT lock
+ * is released. Just move the page from src_pmd to dst_pmd if
+ * possible. Returns zero if it succeeded in moving the page, -EAGAIN
+ * if the operation needs to be repeated by the caller, or another
+ * error code in case of failure.
+ */
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+			 pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
+			 struct vm_area_struct *dst_vma,
+			 struct vm_area_struct *src_vma,
+			 unsigned long dst_addr, unsigned long src_addr)
+{
+	pmd_t _dst_pmd, src_pmdval;
+	struct page *src_page;
+	struct folio *src_folio;
+	struct anon_vma *src_anon_vma, *dst_anon_vma;
+	spinlock_t *src_ptl, *dst_ptl;
+	pgtable_t src_pgtable, dst_pgtable;
+	struct mmu_notifier_range range;
+	int err = 0;
+
+	src_pmdval = *src_pmd;
+	src_ptl = pmd_lockptr(src_mm, src_pmd);
+
+	BUG_ON(!spin_is_locked(src_ptl));
+	mmap_assert_locked(src_mm);
+	mmap_assert_locked(dst_mm);
+
+	BUG_ON(!pmd_trans_huge(src_pmdval));
+	BUG_ON(!pmd_none(dst_pmdval));
+	BUG_ON(src_addr & ~HPAGE_PMD_MASK);
+	BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
+
+	src_page = pmd_page(src_pmdval);
+	if (unlikely(!PageAnonExclusive(src_page))) {
+		spin_unlock(src_ptl);
+		return -EBUSY;
+	}
+
+	src_folio = page_folio(src_page);
+	folio_get(src_folio);
+	spin_unlock(src_ptl);
+
+	/* preallocate dst_pgtable if needed */
+	if (dst_mm != src_mm) {
+		dst_pgtable = pte_alloc_one(dst_mm);
+		if (unlikely(!dst_pgtable)) {
+			err = -ENOMEM;
+			goto put_folio;
+		}
+	} else {
+		dst_pgtable = NULL;
+	}
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
+				src_addr + HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
+
+	/* block all concurrent rmap walks */
+	folio_lock(src_folio);
+
+	/*
+	 * split_huge_page walks the anon_vma chain without the page
+	 * lock. Serialize against it with the anon_vma lock; the page
+	 * lock is not enough.
+	 */
+	src_anon_vma = folio_get_anon_vma(src_folio);
+	if (!src_anon_vma) {
+		err = -EAGAIN;
+		goto unlock_folio;
+	}
+	anon_vma_lock_write(src_anon_vma);
+
+	dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
+	double_pt_lock(src_ptl, dst_ptl);
+	if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
+		     !pmd_same(*dst_pmd, dst_pmdval) ||
+		     folio_mapcount(src_folio) != 1)) {
+		double_pt_unlock(src_ptl, dst_ptl);
+		err = -EAGAIN;
+		goto put_anon_vma;
+	}
+
+	BUG_ON(!folio_test_head(src_folio));
+	BUG_ON(!folio_test_anon(src_folio));
+
+	dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
+	WRITE_ONCE(src_folio->mapping, (struct address_space *) dst_anon_vma);
+	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+
+	src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
+	_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
+	_dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
+	set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
+
+	src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
+	if (dst_pgtable) {
+		pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
+		pte_free(src_mm, src_pgtable);
+		dst_pgtable = NULL;
+
+		mm_inc_nr_ptes(dst_mm);
+		mm_dec_nr_ptes(src_mm);
+		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+		add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	} else {
+		pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
+	}
+	double_pt_unlock(src_ptl, dst_ptl);
+
+put_anon_vma:
+	anon_vma_unlock_write(src_anon_vma);
+	put_anon_vma(src_anon_vma);
+unlock_folio:
+	/* unblock rmap walks */
+	folio_unlock(src_folio);
+	mmu_notifier_invalidate_range_end(&range);
+	if (dst_pgtable)
+		pte_free(dst_mm, dst_pgtable);
+put_folio:
+	folio_put(src_folio);
+
+	return err;
+}
+#endif /* CONFIG_USERFAULTFD */
+
 /*
  * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
  *
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 88433cc25d8a..af23248b3551 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1135,6 +1135,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * Prevent all access to pagetables with the exception of
 	 * gup_fast later handled by the ptep_clear_flush and the VM
 	 * handled by the anon_vma lock + PG_lock.
+	 *
+	 * UFFDIO_REMAP is also prevented from racing thanks to the
+	 * mmap_lock.
 	 */
 	mmap_write_lock(mm);
 	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 96d9eae5c7cc..5ce5e364373c 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -842,3 +842,593 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	mmap_read_unlock(dst_mm);
 	return err;
 }
+
+
+void double_pt_lock(spinlock_t *ptl1,
+		    spinlock_t *ptl2)
+	__acquires(ptl1)
+	__acquires(ptl2)
+{
+	spinlock_t *ptl_tmp;
+
+	if (ptl1 > ptl2) {
+		/* exchange ptl1 and ptl2 */
+		ptl_tmp = ptl1;
+		ptl1 = ptl2;
+		ptl2 = ptl_tmp;
+	}
+	/* lock in virtual address order to avoid lock inversion */
+	spin_lock(ptl1);
+	if (ptl1 != ptl2)
+		spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING);
+	else
+		__acquire(ptl2);
+}
+
+void double_pt_unlock(spinlock_t *ptl1,
+		      spinlock_t *ptl2)
+	__releases(ptl1)
+	__releases(ptl2)
+{
+	spin_unlock(ptl1);
+	if (ptl1 != ptl2)
+		spin_unlock(ptl2);
+	else
+		__release(ptl2);
+}
+
+
+static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+			  struct vm_area_struct *dst_vma,
+			  struct vm_area_struct *src_vma,
+			  unsigned long dst_addr, unsigned long src_addr,
+			  pte_t *dst_pte, pte_t *src_pte,
+			  pte_t orig_dst_pte, pte_t orig_src_pte,
+			  spinlock_t *dst_ptl, spinlock_t *src_ptl,
+			  struct folio *src_folio)
+{
+	struct anon_vma *dst_anon_vma;
+
+	double_pt_lock(dst_ptl, src_ptl);
+
+	if (!pte_same(*src_pte, orig_src_pte) ||
+	    !pte_same(*dst_pte, orig_dst_pte) ||
+	    folio_test_large(src_folio) ||
+	    folio_estimated_sharers(src_folio) != 1) {
+		double_pt_unlock(dst_ptl, src_ptl);
+		return -EAGAIN;
+	}
+
+	BUG_ON(!folio_test_anon(src_folio));
+
+	dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
+	WRITE_ONCE(src_folio->mapping,
+		   (struct address_space *) dst_anon_vma);
+	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
+						       dst_addr));
+
+	orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
+	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
+	orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
+				     dst_vma);
+
+	set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
+
+	if (dst_mm != src_mm) {
+		inc_mm_counter(dst_mm, MM_ANONPAGES);
+		dec_mm_counter(src_mm, MM_ANONPAGES);
+	}
+
+	double_pt_unlock(dst_ptl, src_ptl);
+
+	return 0;
+}
+
+static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+			  unsigned long dst_addr, unsigned long src_addr,
+			  pte_t *dst_pte, pte_t *src_pte,
+			  pte_t orig_dst_pte, pte_t orig_src_pte,
+			  spinlock_t *dst_ptl, spinlock_t *src_ptl)
+{
+	if (!pte_swp_exclusive(orig_src_pte))
+		return -EBUSY;
+
+	double_pt_lock(dst_ptl, src_ptl);
+
+	if (!pte_same(*src_pte, orig_src_pte) ||
+	    !pte_same(*dst_pte, orig_dst_pte)) {
+		double_pt_unlock(dst_ptl, src_ptl);
+		return -EAGAIN;
+	}
+
+	orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
+	set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
+
+	if (dst_mm != src_mm) {
+		inc_mm_counter(dst_mm, MM_ANONPAGES);
+		dec_mm_counter(src_mm, MM_ANONPAGES);
+	}
+
+	double_pt_unlock(dst_ptl, src_ptl);
+
+	return 0;
+}
+
+/*
+ * The mmap_lock for reading is held by the caller. Just move the page
+ * from src_pmd to dst_pmd if possible; returns 0 if the page was
+ * moved, or an error code otherwise.
+ */
+static int remap_pages_pte(struct mm_struct *dst_mm,
+			   struct mm_struct *src_mm,
+			   pmd_t *dst_pmd,
+			   pmd_t *src_pmd,
+			   struct vm_area_struct *dst_vma,
+			   struct vm_area_struct *src_vma,
+			   unsigned long dst_addr,
+			   unsigned long src_addr,
+			   __u64 mode)
+{
+	swp_entry_t entry;
+	pte_t orig_src_pte, orig_dst_pte;
+	spinlock_t *src_ptl, *dst_ptl;
+	pte_t *src_pte = NULL;
+	pte_t *dst_pte = NULL;
+
+	struct folio *src_folio = NULL;
+	struct anon_vma *src_anon_vma = NULL;
+	struct mmu_notifier_range range;
+	int err = 0;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
+				src_addr, src_addr + PAGE_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
+retry:
+	dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
+
+	/* If a huge pmd materialized from under us, fail */
+	if (unlikely(!dst_pte)) {
+		err = -EFAULT;
+		goto out;
+	}
+
+	src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
+
+	/*
+	 * We hold the mmap_lock for reading only, so MADV_DONTNEED
+	 * can zap transparent huge pages under us, or the
+	 * transparent huge page fault can establish new
+	 * transparent huge pages under us.
+	 */
+	if (unlikely(!src_pte)) {
+		err = -EFAULT;
+		goto out;
+	}
+
+	BUG_ON(pmd_none(*dst_pmd));
+	BUG_ON(pmd_none(*src_pmd));
+	BUG_ON(pmd_trans_huge(*dst_pmd));
+	BUG_ON(pmd_trans_huge(*src_pmd));
+
+	spin_lock(dst_ptl);
+	orig_dst_pte = *dst_pte;
+	spin_unlock(dst_ptl);
+	if (!pte_none(orig_dst_pte)) {
+		err = -EEXIST;
+		goto out;
+	}
+
+	spin_lock(src_ptl);
+	orig_src_pte = *src_pte;
+	spin_unlock(src_ptl);
+	if (pte_none(orig_src_pte)) {
+		if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
+			err = -ENOENT;
+		else /* nothing to do to remap a hole */
+			err = 0;
+		goto out;
+	}
+
+	if (pte_present(orig_src_pte)) {
+		/*
+		 * Pin and lock both source folio and anon_vma. Since we are in
+		 * an RCU read section, we can't block, so on contention we have
+		 * to unmap the ptes, obtain the lock and retry.
+		 */
+		if (!src_folio) {
+			struct folio *folio;
+
+			/*
+			 * Pin the page while holding the lock to be sure the
+			 * page isn't freed under us
+			 */
+			spin_lock(src_ptl);
+			if (!pte_same(orig_src_pte, *src_pte)) {
+				spin_unlock(src_ptl);
+				err = -EAGAIN;
+				goto out;
+			}
+
+			folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
+			if (!folio || !folio_test_anon(folio) ||
+			    folio_test_large(folio) ||
+			    folio_estimated_sharers(folio) != 1) {
+				spin_unlock(src_ptl);
+				err = -EBUSY;
+				goto out;
+			}
+
+			folio_get(folio);
+			src_folio = folio;
+			spin_unlock(src_ptl);
+
+			/* block all concurrent rmap walks */
+			if (!folio_trylock(src_folio)) {
+				/* unmap the mapped pte pointers, in LIFO order */
+				pte_unmap(src_pte);
+				pte_unmap(dst_pte);
+				src_pte = dst_pte = NULL;
+				/* now we can block and wait */
+				folio_lock(src_folio);
+				goto retry;
+			}
+		}
+
+		if (!src_anon_vma) {
+			/*
+			 * folio_referenced walks the anon_vma chain
+			 * without the folio lock. Serialize against it with
+			 * the anon_vma lock; the folio lock is not enough.
+			 */
+			src_anon_vma = folio_get_anon_vma(src_folio);
+			if (!src_anon_vma) {
+				/* page was unmapped from under us */
+				err = -EAGAIN;
+				goto out;
+			}
+			if (!anon_vma_trylock_write(src_anon_vma)) {
+				pte_unmap(src_pte);
+				pte_unmap(dst_pte);
+				src_pte = dst_pte = NULL;
+				/* now we can block and wait */
+				anon_vma_lock_write(src_anon_vma);
+				goto retry;
+			}
+		}
+
+		err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
+				     dst_addr, src_addr, dst_pte, src_pte,
+				     orig_dst_pte, orig_src_pte,
+				     dst_ptl, src_ptl, src_folio);
+	} else {
+		entry = pte_to_swp_entry(orig_src_pte);
+		if (non_swap_entry(entry)) {
+			if (is_migration_entry(entry)) {
+				pte_unmap(src_pte);
+				pte_unmap(dst_pte);
+				src_pte = dst_pte = NULL;
+				migration_entry_wait(src_mm, src_pmd,
+						     src_addr);
+				err = -EAGAIN;
+			} else
+				err = -EFAULT;
+			goto out;
+		}
+
+		err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
+				     dst_pte, src_pte,
+				     orig_dst_pte, orig_src_pte,
+				     dst_ptl, src_ptl);
+	}
+
+out:
+	if (src_anon_vma) {
+		anon_vma_unlock_write(src_anon_vma);
+		put_anon_vma(src_anon_vma);
+	}
+	if (src_folio) {
+		folio_unlock(src_folio);
+		folio_put(src_folio);
+	}
+	if (dst_pte)
+		pte_unmap(dst_pte);
+	if (src_pte)
+		pte_unmap(src_pte);
+	mmu_notifier_invalidate_range_end(&range);
+
+	return err;
+}
+
+static int validate_remap_areas(struct vm_area_struct *src_vma,
+				struct vm_area_struct *dst_vma)
+{
+	/* Only allow remapping if both have the same access and protection */
+	if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
+	    pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
+		return -EINVAL;
+
+	/* Only allow remapping if both are mlocked or both aren't */
+	if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
+		return -EINVAL;
+
+	/*
+	 * Be strict and only allow remap_pages if either the src or
+	 * dst range is registered in the userfaultfd to prevent
+	 * userland errors going unnoticed. As far as the VM
+	 * consistency is concerned, it would be perfectly safe to
+	 * remove this check, but there's no useful usage for
+	 * remap_pages outside of userfaultfd registered ranges. This
+	 * is, after all, why it is an ioctl belonging to the
+	 * userfaultfd and not a syscall.
+	 *
+	 * Allow both vmas to be registered in the userfaultfd, just
+	 * in case somebody finds a way to make such a case useful.
+	 * Normally only one of the two vmas would be registered in
+	 * the userfaultfd.
+	 */
+	if (!dst_vma->vm_userfaultfd_ctx.ctx &&
+	    !src_vma->vm_userfaultfd_ctx.ctx)
+		return -EINVAL;
+
+	/*
+	 * FIXME: only allow remapping across anonymous vmas,
+	 * tmpfs should be added.
+	 */
+	if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
+		return -EINVAL;
+
+	/*
+	 * Ensure the dst_vma has an anon_vma or this page
+	 * would get a NULL anon_vma when moved in the
+	 * dst_vma.
+	 */
+	if (unlikely(anon_vma_prepare(dst_vma)))
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * remap_pages - remap arbitrary anonymous pages of an existing vma
+ * @dst_mm: mm of the destination address space
+ * @src_mm: mm of the source address space
+ * @dst_start: start of the destination virtual memory range
+ * @src_start: start of the source virtual memory range
+ * @len: length of the virtual memory range
+ * @mode: UFFDIO_REMAP_MODE_* flags
+ *
+ * remap_pages() remaps arbitrary anonymous pages atomically with zero
+ * copy. It only works on non shared anonymous pages because those can
+ * be relocated without generating non linear anon_vmas in the rmap
+ * code.
+ *
+ * It provides a zero copy mechanism to handle userspace page faults.
+ * The source vma pages should have mapcount == 1, which can be
+ * enforced by using madvise(MADV_DONTFORK) on the src vma.
+ *
+ * The thread receiving the page during the userland page fault
+ * will receive the faulting page in the source vma through the network,
+ * storage or any other I/O device (MADV_DONTFORK in the source vma
+ * prevents remap_pages() from failing with -EBUSY if the process
+ * forks before remap_pages() is called), then it will call
+ * remap_pages() to map the page in the faulting address in the
+ * destination vma.
+ *
+ * This userfaultfd command works purely via pagetables, so it's the
+ * most efficient way to move physical non shared anonymous pages
+ * across different virtual addresses. Unlike mremap()/mmap()/munmap()
+ * it does not create any new vmas. The mapping in the destination
+ * address is atomic.
+ *
+ * It only works if the vma protection bits are identical between the
+ * source and destination vma.
+ *
+ * It can remap non shared anonymous pages within the same vma too.
+ *
+ * If the source virtual memory range has any unmapped holes, or if
+ * the destination virtual memory range is not a whole unmapped hole,
+ * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
+ * provides a very strict behavior to avoid any chance of memory
+ * corruption going unnoticed if there are userland race conditions.
+ * Only one thread should resolve the userland page fault at any given
+ * time for any given faulting address. This means that if two threads
+ * try to both call remap_pages() on the same destination address at the
+ * same time, the second thread will get an explicit error from this
+ * command.
+ *
+ * The command retval will be "len" if successful. The command
+ * however can be interrupted by fatal signals or errors. If
+ * interrupted it will return the number of bytes successfully
+ * remapped before the interruption if any, or the negative error if
+ * none. It will never return zero. Either it will return an error or
+ * an amount of bytes successfully moved. If the retval reports a
+ * "short" remap, the remap_pages() command should be repeated by
+ * userland with src+retval, dst+retval, len-retval if it wants to
+ * know about the error that interrupted it.
+ *
+ * The UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES flag can be specified to
+ * prevent -ENOENT errors from materializing if there are holes in the
+ * source virtual range that is being remapped. The holes will be
+ * accounted as successfully remapped in the retval of the
+ * command. This is mostly useful to remap hugepage naturally aligned
+ * virtual regions without knowing if there are transparent hugepages
+ * in the regions or not, but preventing the risk of having to split
+ * the hugepmd during the remap.
+ *
+ * If there's any rmap walk that is taking the anon_vma locks without
+ * first obtaining the folio lock (for example split_huge_page and
+ * folio_referenced), it will have to verify if the folio->mapping
+ * has changed after taking the anon_vma lock. If it changed it
+ * should release the lock and retry obtaining a new anon_vma, because
+ * it means the anon_vma was changed by remap_pages() before the lock
+ * could be obtained. This is the only additional complexity added to
+ * the rmap code to provide this anonymous page remapping functionality.
+ */
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+		    unsigned long dst_start, unsigned long src_start,
+		    unsigned long len, __u64 mode)
+{
+	struct vm_area_struct *src_vma, *dst_vma;
+	unsigned long src_addr, dst_addr;
+	pmd_t *src_pmd, *dst_pmd;
+	long err = -EINVAL;
+	ssize_t moved = 0;
+
+	/*
+	 * Sanitize the command parameters:
+	 */
+	BUG_ON(src_start & ~PAGE_MASK);
+	BUG_ON(dst_start & ~PAGE_MASK);
+	BUG_ON(len & ~PAGE_MASK);
+
+	/* Does the address range wrap, or is the span zero-sized? */
+	BUG_ON(src_start + len <= src_start);
+	BUG_ON(dst_start + len <= dst_start);
+
+	/*
+	 * Because these are read semaphores there's no risk of lock
+	 * inversion.
+	 */
+	mmap_read_lock(dst_mm);
+	if (dst_mm != src_mm)
+		mmap_read_lock(src_mm);
+
+	/*
+	 * Make sure the vma is not shared, that the src and dst remap
+	 * ranges are both valid and fully within a single existing
+	 * vma.
+	 */
+	src_vma = find_vma(src_mm, src_start);
+	if (!src_vma || (src_vma->vm_flags & VM_SHARED))
+		goto out;
+	if (src_start < src_vma->vm_start ||
+	    src_start + len > src_vma->vm_end)
+		goto out;
+
+	dst_vma = find_vma(dst_mm, dst_start);
+	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
+		goto out;
+	if (dst_start < dst_vma->vm_start ||
+	    dst_start + len > dst_vma->vm_end)
+		goto out;
+
+	err = validate_remap_areas(src_vma, dst_vma);
+	if (err)
+		goto out;
+
+	for (src_addr = src_start, dst_addr = dst_start;
+	     src_addr < src_start + len;) {
+		spinlock_t *ptl;
+		pmd_t dst_pmdval;
+		unsigned long step_size;
+
+		BUG_ON(dst_addr >= dst_start + len);
+		/*
+		 * Below works because anonymous areas would not have a
+		 * transparent huge PUD. If file-backed support is added,
+		 * that case would need to be handled here.
+		 */
+		src_pmd = mm_find_pmd(src_mm, src_addr);
+		if (unlikely(!src_pmd)) {
+			if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
+				err = -ENOENT;
+				break;
+			}
+			src_pmd = mm_alloc_pmd(src_mm, src_addr);
+			if (unlikely(!src_pmd)) {
+				err = -ENOMEM;
+				break;
+			}
+		}
+		dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
+		if (unlikely(!dst_pmd)) {
+			err = -ENOMEM;
+			break;
+		}
+
+		dst_pmdval = pmdp_get_lockless(dst_pmd);
+		/*
+		 * If the dst_pmd is mapped as THP don't override it and just
+		 * be strict. If dst_pmd changes into THP after this check, the
+		 * remap_pages_huge_pmd() will detect the change and retry
+		 * while remap_pages_pte() will detect the change and fail.
+		 */
+		if (unlikely(pmd_trans_huge(dst_pmdval))) {
+			err = -EEXIST;
+			break;
+		}
+
+		ptl = pmd_trans_huge_lock(src_pmd, src_vma);
+		if (ptl && !pmd_trans_huge(*src_pmd)) {
+			spin_unlock(ptl);
+			ptl = NULL;
+		}
+
+		if (ptl) {
+			/*
+			 * Check if we can move the pmd without
+			 * splitting it. First check the address
+			 * alignment to be the same in src/dst. These
+			 * checks don't actually need the PT lock but
+			 * it's good to do it here to optimize this
+			 * block away at build time if
+			 * CONFIG_TRANSPARENT_HUGEPAGE is not set.
+			 */
+			if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
+			    src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
+				spin_unlock(ptl);
+				split_huge_pmd(src_vma, src_pmd, src_addr);
+				continue;
+			}
+
+			err = remap_pages_huge_pmd(dst_mm, src_mm,
+						   dst_pmd, src_pmd,
+						   dst_pmdval,
+						   dst_vma, src_vma,
+						   dst_addr, src_addr);
+			step_size = HPAGE_PMD_SIZE;
+		} else {
+			if (pmd_none(*src_pmd)) {
+				if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
+					err = -ENOENT;
+					break;
+				}
+				if (unlikely(__pte_alloc(src_mm, src_pmd))) {
+					err = -ENOMEM;
+					break;
+				}
+			}
+
+			if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
+				err = -ENOMEM;
+				break;
+			}
+
+			err = remap_pages_pte(dst_mm, src_mm,
+					      dst_pmd, src_pmd,
+					      dst_vma, src_vma,
+					      dst_addr, src_addr,
+					      mode);
+			step_size = PAGE_SIZE;
+		}
+
+		cond_resched();
+
+		if (!err) {
+			dst_addr += step_size;
+			src_addr += step_size;
+			moved += step_size;
+		}
+
+		if ((!err || err == -EAGAIN) &&
+		    fatal_signal_pending(current))
+			err = -EINTR;
+
+		if (err && err != -EAGAIN)
+			break;
+	}
+
+out:
+	mmap_read_unlock(dst_mm);
+	if (dst_mm != src_mm)
+		mmap_read_unlock(src_mm);
+	BUG_ON(moved < 0);
+	BUG_ON(err > 0);
+	BUG_ON(!moved && !err);
+	return moved ? moved : err;
+}
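As a usage note on the retval semantics documented in remap_pages()
above: a "short" remap should be retried with src+retval, dst+retval,
len-retval. A hedged sketch of that loop, reusing the uffd_remap()
helper sketched under the 2/3 commit message (illustrative only, not
part of this series):

	/* Keep remapping until done, treating -EAGAIN as transient. */
	long done = 0, ret;

	while (done < (long)len) {
		ret = uffd_remap(uffd, dst + done, src + done, len - done);
		if (ret > 0)
			done += ret;
		else if (ret != -EAGAIN)
			break;	/* ret holds the error that interrupted us */
	}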
From patchwork Sat Sep 23 01:31:46 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13396498
From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 22 Sep 2023 18:31:46 -0700
Subject: [PATCH v2 3/3] selftests/mm: add UFFDIO_REMAP ioctl test
Message-ID: <20230923013148.1390521-4-surenb@google.com>
In-Reply-To: <20230923013148.1390521-1-surenb@google.com>
References: <20230923013148.1390521-1-surenb@google.com>
To: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, shuah@kernel.org,
 aarcange@redhat.com, lokeshgidra@google.com, peterx@redhat.com,
 david@redhat.com, hughd@google.com, mhocko@suse.com,
 axelrasmussen@google.com, rppt@kernel.org, willy@infradead.org,
 Liam.Howlett@oracle.com, jannh@google.com, zhangpeng362@huawei.com,
 bgeffon@google.com, kaleshsingh@google.com, ngeoffray@google.com,
 jdduke@google.com, surenb@google.com, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, kernel-team@android.com

Add a test for the new UFFDIO_REMAP ioctl which uses uffd to remap the
source buffer into the destination buffer while checking the contents
of both after the remapping. After the operation the content of the
destination buffer should match the original source buffer's content
while the source buffer should be zeroed.
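Expressed as code, the per-page post-condition the test verifies is
roughly the following (illustrative sketch only; area_count(),
count_verify[], area_src and area_dst are the selftest's existing
helpers and globals, and nr is a page index):

	#include <assert.h>

	/*
	 * After the monitor thread resolves the missing fault on page
	 * nr via UFFDIO_REMAP: the destination carries the seeded
	 * count, and the source reads back as zero-fill.
	 */
	assert(*area_count(area_dst, nr) == count_verify[nr]);
	assert(*area_count(area_src, nr) == 0);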
Signed-off-by: Suren Baghdasaryan
---
 tools/testing/selftests/mm/uffd-common.c     | 41 ++++++++++++-
 tools/testing/selftests/mm/uffd-common.h     |  1 +
 tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++++++++++++++++++++
 3 files changed, 102 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
index 02b89860e193..2a3ffd0ce96e 100644
--- a/tools/testing/selftests/mm/uffd-common.c
+++ b/tools/testing/selftests/mm/uffd-common.c
@@ -52,6 +52,13 @@ static int anon_allocate_area(void **alloc_area, bool is_src)
 		*alloc_area = NULL;
 		return -errno;
 	}
+
+	/* Prevent source pages from collapsing into THPs */
+	if (madvise(*alloc_area, nr_pages * page_size, MADV_NOHUGEPAGE)) {
+		*alloc_area = NULL;
+		return -errno;
+	}
+
 	return 0;
 }
 
@@ -484,8 +491,14 @@ void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args)
 		offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
 		offset &= ~(page_size-1);
 
-		if (copy_page(uffd, offset, args->apply_wp))
-			args->missing_faults++;
+		/* UFFDIO_REMAP is supported for anon non-shared mappings. */
+		if (uffd_test_ops == &anon_uffd_test_ops && !map_shared) {
+			if (remap_page(uffd, offset))
+				args->missing_faults++;
+		} else {
+			if (copy_page(uffd, offset, args->apply_wp))
+				args->missing_faults++;
+		}
 	}
 }
 
@@ -620,6 +633,30 @@ int copy_page(int ufd, unsigned long offset, bool wp)
 	return __copy_page(ufd, offset, false, wp);
 }
 
+int remap_page(int ufd, unsigned long offset)
+{
+	struct uffdio_remap uffdio_remap;
+
+	if (offset >= nr_pages * page_size)
+		err("unexpected offset %lu\n", offset);
+	uffdio_remap.dst = (unsigned long) area_dst + offset;
+	uffdio_remap.src = (unsigned long) area_src + offset;
+	uffdio_remap.len = page_size;
+	uffdio_remap.mode = UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES;
+	uffdio_remap.remap = 0;
+	if (ioctl(ufd, UFFDIO_REMAP, &uffdio_remap)) {
+		/* real retval in uffdio_remap.remap */
+		if (uffdio_remap.remap != -EEXIST)
+			err("UFFDIO_REMAP error: %"PRId64,
+			    (int64_t)uffdio_remap.remap);
+		wake_range(ufd, uffdio_remap.dst, page_size);
+	} else if (uffdio_remap.remap != page_size) {
+		err("UFFDIO_REMAP error: %"PRId64, (int64_t)uffdio_remap.remap);
+	} else
+		return 1;
+	return 0;
+}
+
 int uffd_open_dev(unsigned int flags)
 {
 	int fd, uffd;
diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
index 7c4fa964c3b0..2bbb15d1920c 100644
--- a/tools/testing/selftests/mm/uffd-common.h
+++ b/tools/testing/selftests/mm/uffd-common.h
@@ -111,6 +111,7 @@ void wp_range(int ufd, __u64 start, __u64 len, bool wp);
 void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args);
 int __copy_page(int ufd, unsigned long offset, bool retry, bool wp);
 int copy_page(int ufd, unsigned long offset, bool wp);
+int remap_page(int ufd, unsigned long offset);
 void *uffd_poll_thread(void *arg);
 
 int uffd_open_dev(unsigned int flags);
diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
index 2709a34a39c5..a33819639187 100644
--- a/tools/testing/selftests/mm/uffd-unit-tests.c
+++ b/tools/testing/selftests/mm/uffd-unit-tests.c
@@ -824,6 +824,10 @@ static void uffd_events_test_common(bool wp)
 	char c;
 	struct uffd_args args = { 0 };
 
+	/* Prevent source pages from being mapped more than once */
+	if (madvise(area_src, nr_pages * page_size, MADV_DONTFORK))
+		err("madvise(MADV_DONTFORK) failed");
+
 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
 	if (uffd_register(uffd, area_dst, nr_pages * page_size,
 			  true, wp, false))
@@ -1062,6 +1066,58 @@ static void uffd_poison_test(uffd_test_args_t *targs)
 	uffd_test_pass();
 }
 
+static void uffd_remap_test(uffd_test_args_t *targs)
+{
+	unsigned long nr;
+	pthread_t uffd_mon;
+	char c;
+	unsigned long long count;
+	struct uffd_args args = { 0 };
+
+	if (uffd_register(uffd, area_dst, nr_pages * page_size,
+			  true, false, false))
+		err("register failure");
+
+	if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+		err("uffd_poll_thread create");
+
+	/*
+	 * Read each of the pages back using the UFFD-registered mapping. We
+	 * expect that the first time we touch a page, it will result in a
+	 * missing fault. uffd_poll_thread will resolve the fault by remapping
+	 * the source page to the destination.
+	 */
+	for (nr = 0; nr < nr_pages; nr++) {
+		/* Check area_src content */
+		count = *area_count(area_src, nr);
+		if (count != count_verify[nr])
+			err("nr %lu source memory invalid %llu %llu\n",
+			    nr, count, count_verify[nr]);
+
+		/* Faulting into area_dst should remap the page */
+		count = *area_count(area_dst, nr);
+		if (count != count_verify[nr])
+			err("nr %lu memory corruption %llu %llu\n",
+			    nr, count, count_verify[nr]);
+
+		/* Re-check area_src content which should be empty */
+		count = *area_count(area_src, nr);
+		if (count != 0)
+			err("nr %lu remap failed %llu %llu\n",
+			    nr, count, count_verify[nr]);
+	}
+
+	if (write(pipefd[1], &c, sizeof(c)) != sizeof(c))
+		err("pipe write");
+	if (pthread_join(uffd_mon, NULL))
+		err("join() failed");
+
+	if (args.missing_faults != nr_pages || args.minor_faults != 0)
+		uffd_test_fail("stats check error");
+	else
+		uffd_test_pass();
+}
+
 /*
  * Test the returned uffdio_register.ioctls with different register modes.
  * Note that _UFFDIO_ZEROPAGE is tested separately in the zeropage test.
  */
@@ -1139,6 +1195,12 @@ uffd_test_case_t uffd_tests[] = {
 		.mem_targets = MEM_ALL,
 		.uffd_feature_required = 0,
 	},
+	{
+		.name = "remap",
+		.uffd_fn = uffd_remap_test,
+		.mem_targets = MEM_ANON,
+		.uffd_feature_required = 0,
+	},
 	{
 		.name = "wp-fork",
 		.uffd_fn = uffd_wp_fork_test,