From patchwork Thu Feb 23 00:57:52 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 13149747
Date: Wed, 22 Feb 2023 16:57:52 -0800
In-Reply-To: <20230223005754.2700663-1-axelrasmussen@google.com>
References: <20230223005754.2700663-1-axelrasmussen@google.com>
X-Mailer: git-send-email 2.39.2.637.g21b0678d19-goog
Message-ID: <20230223005754.2700663-4-axelrasmussen@google.com>
Subject: [PATCH v2 3/5] mm: userfaultfd: combine 'mode' and 'wp_copy' arguments
From: Axel Rasmussen
To: Alexander Viro, Andrew Morton, Hugh Dickins, Jan Kara,
Howlett" , Matthew Wilcox , Mike Kravetz , Mike Rapoport , Muchun Song , Nadav Amit , Peter Xu , Shuah Khan Cc: James Houghton , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Axel Rasmussen X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 1BC32C000D X-Rspam-User: X-Stat-Signature: rqw3hk9tcy4cd7gx9o3kxuiz6si13qwy X-HE-Tag: 1677113887-51207 X-HE-Meta: U2FsdGVkX198bihqHkxq1PGC09CfUvd1A9AgaLF05XZC3ZJmYzbg5fmjyrz0WpSkAixaCbVOij2fsRaGfW+DSiex8eCA92sEbw/0gH+zzW0bLKi0cGfm1BNHeZhk/4cKNwWebmuZzzksyMXNah1W7aNQpgA2TOI1Q85qmpfFL/rVpRrmvwN8BonUv3dkM8lNVbBb97OjQX9N5QegxF9wvtg7YqkMe362nrFINvE+H8i6qE3dbDMdFoUPfmNHqJH3FGCCy76fmdAMXjq6IOC0kA5ct3Co4R3tvDGQrCCkbid0zHQtJnIMtgI1OYSy+DO9P6vIXHn8SI4nYiUsajNpl//r8GzUTyxPRPbPBDYTOdhqMDGam5id2IEcDgytaIzQerF3Z1pOPvLEqROTjFsHqggKoCrCP9pgoKtpqTXPKgaumxyZMeeGic454tZoD3jfzazzwDi48WJmnZY72OOa0OTtGeiNTsmy2R/ufvcbTadNx1JFV0efjOX+FOPw+AaA/mbH/D0o5Tlk2+Aao7SU58b7OQeV2D1C6zAkYfaDDNnbPcjHGYROGUezSaUE0RIowv9KxJWxkWF7vUoEXwmrMoUy+y8vM3Xu6bLTDVVQtKMHZUlbTdaE92HwwPKzRN4abBDDsxyPzfUfrc9486e3eU9hImZHSvAu5q5NRyl+4bzOOWWxAHpMKpuJGBVoFD+MASR2Yv49Kb2NeTyh39WKcOQLvkGWrtqVLfSCQlWeTPsi3eDf6wpzRyNeEPLMFf/Jt6uEWVmn7V2pURBwuDTH6mIBa8t5mUDM+KA469yJh5MNoMxXROJpgDdT8tv8pRIEDTS9aWWPlK3+qk1+4Rho3Jl5VnaY0dgZQtRCAyWAADn/1TivUyZf9KMIpKXUIt4IC7OZ+g54H6DRbe+2EqvtiTLkWJuYMVxLNM7KR8kEmVe2Ww7oiTwt6VEmvI81QqHWlb625xEyYxb45Uj6VZV xct+Xt5U aGzpxx7Is1eVvh2bMHYW4jnZZC6dFYzDfIdDoTpKeky8dwSr1D6z8pOJwtl5ypLBPgyf1vA7FTOh4QH23/DO7OER+Lobs5k+6hCUmFtPlbBOy9WMvG/WWYBCmuWj93xi6ryySIay+lDphaJMm97UKyxWYHUKgCEJIBdB8RrUlBdBZbGD6Gw/BhbLS+27+xPrW1+PGB8ZIkIoK4FQ7PsSHVw/r5IfmkRpOR0pVdUgdDFsW9LWRfpGoWho/mJzgLX2dZTjCyzwa6tfZ4ZYGfeuukFWkA1HX3SYNkLGC8GiMcl8S3J0FPmYVwBz2vpYH9b4ug9p3qgm+WzTssXVL1AjID/5t0xgU8g80JBjH9l5h+sAVCxzzZxuFjCsMmX9KE+69KipUbp/xQJjgg7uVN1L/YBvzMjzaRXThE3vZuxxmS6I7bA7Q1McebvaAF59NiaOgaahZnxeHYeje/NQT5j2jNH4BeCTqAPcnbPnbub/nSIVWZhLtsZaXF7Q52ev6LK4qLerKV0zwp/pAP+RCfoZPJM3wxlawAuWT+P9W2fr5cqNdrbLqhWZ9By3SqzbZ7Yn9sKbQJsGCQvu2yeK5PaCv5/H47rYHd+SjGyS3K+/3+ghC44WFWZFr+rjYxUBo8lu4xJOBrFKpc6nfS7a+AbWL2jG7mS6g+TU2mLF3xuhoioyHddgFevc1zMH3S7YaGHr8tpcoQIwThVGjUePAejexZx6/SkeTYjrOtWG+ X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Many userfaultfd ioctl functions take both a 'mode' and a 'wp_copy' argument. In future commits we plan to plumb the flags through to more places, so we'd be proliferating the very long argument list even further. Let's take the time to simplify the argument list. Combine the two arguments into one - and generalize, so when we add more flags in the future, it doesn't imply more function arguments. Since the modes (copy, zeropage, continue) are mutually exclusive, store them as an integer value (0, 1, 2) in the low bits. Place combine-able flag bits in the high bits. 
Signed-off-by: Axel Rasmussen
Acked-by: James Houghton
---
 fs/userfaultfd.c              |  5 ++-
 include/linux/hugetlb.h       | 11 ++---
 include/linux/shmem_fs.h      |  4 +-
 include/linux/userfaultfd_k.h | 30 +++++++-------
 mm/hugetlb.c                  | 14 ++++---
 mm/shmem.c                    |  6 +--
 mm/userfaultfd.c              | 76 ++++++++++++++++-------------------
 7 files changed, 70 insertions(+), 76 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index a95f6aaef76b..2db15a5e3224 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1725,6 +1725,7 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 	struct uffdio_copy uffdio_copy;
 	struct uffdio_copy __user *user_uffdio_copy;
 	struct userfaultfd_wake_range range;
+	int flags = 0;
 
 	user_uffdio_copy = (struct uffdio_copy __user *) arg;
 
@@ -1751,10 +1752,12 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 		goto out;
 	if (uffdio_copy.mode & ~(UFFDIO_COPY_MODE_DONTWAKE|UFFDIO_COPY_MODE_WP))
 		goto out;
+	if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP)
+		flags |= MFILL_ATOMIC_WP;
 	if (mmget_not_zero(ctx->mm)) {
 		ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
 					uffdio_copy.len, &ctx->mmap_changing,
-					uffdio_copy.mode);
+					flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d3fc104aab78..1e66a75b4da4 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -12,7 +12,6 @@
 #include <linux/kref.h>
 #include <linux/pgtable.h>
 #include <linux/gfp.h>
-#include <linux/userfaultfd_k.h>
 
 struct ctl_table;
 struct user_struct;
@@ -161,9 +160,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			     struct vm_area_struct *dst_vma,
 			     unsigned long dst_addr,
 			     unsigned long src_addr,
-			     enum mcopy_atomic_mode mode,
-			     struct page **pagep,
-			     bool wp_copy);
+			     int mode_flags,
+			     struct page **pagep);
 #endif /* CONFIG_USERFAULTFD */
 bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 			   struct vm_area_struct *vma,
@@ -359,9 +357,8 @@ static inline int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 					   struct vm_area_struct *dst_vma,
 					   unsigned long dst_addr,
 					   unsigned long src_addr,
-					   enum mcopy_atomic_mode mode,
-					   struct page **pagep,
-					   bool wp_copy)
+					   int mode_flags,
+					   struct page **pagep)
 {
 	BUG();
 	return 0;
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 2a0b1dc0460f..6bbb243716f3 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -153,11 +153,11 @@
 extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 				  struct vm_area_struct *dst_vma,
 				  unsigned long dst_addr,
 				  unsigned long src_addr,
-				  bool zeropage, bool wp_copy,
+				  int mode_flags,
 				  struct page **pagep);
 #else /* !CONFIG_SHMEM */
 #define shmem_mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, \
-			       src_addr, zeropage, wp_copy, pagep) ({ BUG(); 0; })
+			       src_addr, mode_flags, pagep) ({ BUG(); 0; })
 #endif /* CONFIG_SHMEM */
 #endif /* CONFIG_USERFAULTFD */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index c6c23408d300..185024128e0f 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -40,30 +40,28 @@
 extern int sysctl_unprivileged_userfaultfd;
 
 extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason);
 
-/*
- * The mode of operation for __mcopy_atomic and its helpers.
- *
- * This is almost an implementation detail (mcopy_atomic below doesn't take this
- * as a parameter), but it's exposed here because memory-kind-specific
- * implementations (e.g. hugetlbfs) need to know the mode of operation.
- */
-enum mcopy_atomic_mode {
-	/* A normal copy_from_user into the destination range. */
-	MCOPY_ATOMIC_NORMAL,
-	/* Don't copy; map the destination range to the zero page. */
-	MCOPY_ATOMIC_ZEROPAGE,
-	/* Just install pte(s) with the existing page(s) in the page cache. */
-	MCOPY_ATOMIC_CONTINUE,
+/* Mutually exclusive modes of operation. */
+enum mfill_atomic_mode {
+	MFILL_ATOMIC_COPY,
+	MFILL_ATOMIC_ZEROPAGE,
+	MFILL_ATOMIC_CONTINUE,
+	NR_MFILL_ATOMIC_MODES,
 };
 
+#define MFILL_ATOMIC_MODE_BITS (const_ilog2(NR_MFILL_ATOMIC_MODES - 1) + 1)
+#define MFILL_ATOMIC_MODE_MASK (BIT(MFILL_ATOMIC_MODE_BITS) - 1)
+
+/* Flags controlling behavior. */
+#define MFILL_ATOMIC_WP BIT(MFILL_ATOMIC_MODE_BITS + 0)
+
 extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
 				    struct vm_area_struct *dst_vma,
 				    unsigned long dst_addr, struct page *page,
-				    bool newly_allocated, bool wp_copy);
+				    bool newly_allocated, int mode_flags);
 extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
 				 unsigned long src_start, unsigned long len,
-				 atomic_t *mmap_changing, __u64 mode);
+				 atomic_t *mmap_changing, int flags);
 extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm,
 				     unsigned long dst_start,
 				     unsigned long len,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0afd2ed8ad39..7fc4f529b4d7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -34,6 +34,7 @@
 #include <linux/delayacct.h>
 #include <linux/memory.h>
 #include <linux/mm_inline.h>
+#include <linux/userfaultfd_k.h>
 
 #include <asm/page.h>
 #include <asm/pgalloc.h>
@@ -6166,11 +6167,12 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			     struct vm_area_struct *dst_vma,
 			     unsigned long dst_addr,
 			     unsigned long src_addr,
-			     enum mcopy_atomic_mode mode,
-			     struct page **pagep,
-			     bool wp_copy)
+			     int mode_flags,
+			     struct page **pagep)
 {
-	bool is_continue = (mode == MCOPY_ATOMIC_CONTINUE);
+	int mode = mode_flags & MFILL_ATOMIC_MODE_MASK;
+	bool is_continue = (mode == MFILL_ATOMIC_CONTINUE);
+	bool wp_enabled = (mode_flags & MFILL_ATOMIC_WP);
 	struct hstate *h = hstate_vma(dst_vma);
 	struct address_space *mapping = dst_vma->vm_file->f_mapping;
 	pgoff_t idx = vma_hugecache_offset(h, dst_vma, dst_addr);
@@ -6305,7 +6307,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	 * For either: (1) CONTINUE on a non-shared VMA, or (2) UFFDIO_COPY
 	 * with wp flag set, don't set pte write bit.
 	 */
-	if (wp_copy || (is_continue && !vm_shared))
+	if (wp_enabled || (is_continue && !vm_shared))
 		writable = 0;
 	else
 		writable = dst_vma->vm_flags & VM_WRITE;
@@ -6320,7 +6322,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 		_dst_pte = huge_pte_mkdirty(_dst_pte);
 	_dst_pte = pte_mkyoung(_dst_pte);
 
-	if (wp_copy)
+	if (wp_enabled)
 		_dst_pte = huge_pte_mkuffd_wp(_dst_pte);
 
 	set_huge_pte_at(dst_vma->vm_mm, dst_addr, dst_pte, _dst_pte);
diff --git a/mm/shmem.c b/mm/shmem.c
index cc03c61190eb..98c9c1f08389 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2402,7 +2402,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 			   struct vm_area_struct *dst_vma,
 			   unsigned long dst_addr,
 			   unsigned long src_addr,
-			   bool zeropage, bool wp_copy,
+			   int mode_flags,
 			   struct page **pagep)
 {
 	struct inode *inode = file_inode(dst_vma->vm_file);
@@ -2434,7 +2434,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 		if (!folio)
 			goto out_unacct_blocks;
 
-		if (!zeropage) {	/* COPY */
+		if ((mode_flags & MFILL_ATOMIC_MODE_MASK) == MFILL_ATOMIC_COPY) {
 			page_kaddr = kmap_local_folio(folio, 0);
 			/*
 			 * The read mmap_lock is held here.  Despite the
@@ -2493,7 +2493,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       &folio->page, true, wp_copy);
+				       &folio->page, true, mode_flags);
 	if (ret)
 		goto out_delete_from_cache;
 
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 4bf5c97c665a..7882e4c60f60 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -58,7 +58,7 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 int mfill_atomic_install_pte(pmd_t *dst_pmd,
 			     struct vm_area_struct *dst_vma,
 			     unsigned long dst_addr, struct page *page,
-			     bool newly_allocated, bool wp_copy)
+			     bool newly_allocated, int mode_flags)
 {
 	int ret;
 	pte_t _dst_pte, *dst_pte;
@@ -79,7 +79,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
 	 * Always mark a PTE as write-protected when needed, regardless of
 	 * VM_WRITE, which the user might change.
 	 */
-	if (wp_copy) {
+	if (mode_flags & MFILL_ATOMIC_WP) {
 		_dst_pte = pte_mkuffd_wp(_dst_pte);
 		writable = false;
 	}
@@ -145,8 +145,8 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 				 struct vm_area_struct *dst_vma,
 				 unsigned long dst_addr,
 				 unsigned long src_addr,
-				 struct page **pagep,
-				 bool wp_copy)
+				 int mode_flags,
+				 struct page **pagep)
 {
 	void *page_kaddr;
 	int ret;
@@ -207,7 +207,7 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       page, true, wp_copy);
+				       page, true, mode_flags);
 	if (ret)
 		goto out_release;
 out:
@@ -255,7 +255,7 @@ static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
 static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
 				     struct vm_area_struct *dst_vma,
 				     unsigned long dst_addr,
-				     bool wp_copy)
+				     int mode_flags)
 {
 	struct inode *inode = file_inode(dst_vma->vm_file);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
@@ -281,7 +281,7 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
 	}
 
 	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       page, false, wp_copy);
+				       page, false, mode_flags);
 	if (ret)
 		goto out_release;
 
@@ -326,9 +326,9 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
-					      enum mcopy_atomic_mode mode,
-					      bool wp_copy)
+					      int mode_flags)
 {
+	int mode = mode_flags & MFILL_ATOMIC_MODE_MASK;
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
 	int vm_shared = dst_vma->vm_flags & VM_SHARED;
 	ssize_t err;
@@ -347,7 +347,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * by THP.  Since we can not reliably insert a zero page, this
	 * feature is not supported.
 	 */
-	if (mode == MCOPY_ATOMIC_ZEROPAGE) {
+	if (mode == MFILL_ATOMIC_ZEROPAGE) {
 		mmap_read_unlock(dst_mm);
 		return -EINVAL;
 	}
@@ -415,7 +415,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 			goto out_unlock;
 		}
 
-		if (mode != MCOPY_ATOMIC_CONTINUE &&
+		if (mode != MFILL_ATOMIC_CONTINUE &&
 		    !huge_pte_none_mostly(huge_ptep_get(dst_pte))) {
 			err = -EEXIST;
 			hugetlb_vma_unlock_read(dst_vma);
@@ -423,9 +423,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 			goto out_unlock;
 		}
 
-		err = hugetlb_mfill_atomic_pte(dst_pte, dst_vma,
-					       dst_addr, src_addr, mode, &page,
-					       wp_copy);
+		err = hugetlb_mfill_atomic_pte(dst_pte, dst_vma, dst_addr,
+					       src_addr, mode_flags, &page);
 
 		hugetlb_vma_unlock_read(dst_vma);
 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
@@ -479,23 +478,22 @@ extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
-				    enum mcopy_atomic_mode mode,
-				    bool wp_copy);
+				    int mode_flags);
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 						struct vm_area_struct *dst_vma,
 						unsigned long dst_addr,
 						unsigned long src_addr,
-						struct page **page,
-						enum mcopy_atomic_mode mode,
-						bool wp_copy)
+						struct page **pagep,
+						int mode_flags)
 {
+	int mode = mode_flags & MFILL_ATOMIC_MODE_MASK;
 	ssize_t err;
 
-	if (mode == MCOPY_ATOMIC_CONTINUE) {
-		return mfill_atomic_pte_continue(dst_pmd, dst_vma,
-						 dst_addr, wp_copy);
+	if (mode == MFILL_ATOMIC_CONTINUE) {
+		return mfill_atomic_pte_continue(dst_pmd, dst_vma,
+						 dst_addr, mode_flags);
 	}
 
 	/*
@@ -509,18 +507,17 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 	 * and not in the radix tree.
 	 */
 	if (!(dst_vma->vm_flags & VM_SHARED)) {
-		if (mode == MCOPY_ATOMIC_NORMAL)
-			err = mfill_atomic_pte_copy(dst_pmd, dst_vma,
-						    dst_addr, src_addr, page,
-						    wp_copy);
+		if (mode == MFILL_ATOMIC_COPY)
+			err = mfill_atomic_pte_copy(dst_pmd, dst_vma,
+						    dst_addr, src_addr,
+						    mode_flags, pagep);
 		else
 			err = mfill_atomic_pte_zeropage(dst_pmd,
 						 dst_vma, dst_addr);
 	} else {
 		err = shmem_mfill_atomic_pte(dst_pmd, dst_vma,
 					     dst_addr, src_addr,
-					     mode != MCOPY_ATOMIC_NORMAL,
-					     wp_copy, page);
+					     mode_flags, pagep);
 	}
 
 	return err;
@@ -530,9 +527,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 					    unsigned long dst_start,
 					    unsigned long src_start,
 					    unsigned long len,
-					    enum mcopy_atomic_mode mcopy_mode,
 					    atomic_t *mmap_changing,
-					    __u64 mode)
+					    int mode_flags)
 {
 	struct vm_area_struct *dst_vma;
 	ssize_t err;
@@ -540,7 +536,6 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	unsigned long src_addr, dst_addr;
 	long copied;
 	struct page *page;
-	bool wp_copy;
 
 	/*
 	 * Sanitize the command parameters:
@@ -590,8 +585,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * validate 'mode' now that we know the dst_vma: don't allow
 	 * a wrprotect copy if the userfaultfd didn't register as WP.
 	 */
-	wp_copy = mode & UFFDIO_COPY_MODE_WP;
-	if (wp_copy && !(dst_vma->vm_flags & VM_UFFD_WP))
+	if ((mode_flags & MFILL_ATOMIC_WP) && !(dst_vma->vm_flags & VM_UFFD_WP))
 		goto out_unlock;
 
 	/*
@@ -599,12 +593,12 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
 		return mfill_atomic_hugetlb(dst_vma, dst_start,
-					    src_start, len, mcopy_mode,
-					    wp_copy);
+					    src_start, len, mode_flags);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
-	if (!vma_is_shmem(dst_vma) && mcopy_mode == MCOPY_ATOMIC_CONTINUE)
+	if (!vma_is_shmem(dst_vma) &&
+	    (mode_flags & MFILL_ATOMIC_MODE_MASK) == MFILL_ATOMIC_CONTINUE)
 		goto out_unlock;
 
 	/*
@@ -652,7 +646,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		BUG_ON(pmd_trans_huge(*dst_pmd));
 
 		err = mfill_atomic_pte(dst_pmd, dst_vma, dst_addr,
-				       src_addr, &page, mcopy_mode, wp_copy);
+				       src_addr, &page, mode_flags);
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {
@@ -700,24 +694,24 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 
 ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
 			  unsigned long src_start, unsigned long len,
-			  atomic_t *mmap_changing, __u64 mode)
+			  atomic_t *mmap_changing, int flags)
 {
 	return mfill_atomic(dst_mm, dst_start, src_start, len,
-			    MCOPY_ATOMIC_NORMAL, mmap_changing, mode);
+			    mmap_changing, flags | MFILL_ATOMIC_COPY);
 }
 
 ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start,
 			      unsigned long len, atomic_t *mmap_changing)
 {
-	return mfill_atomic(dst_mm, start, 0, len, MCOPY_ATOMIC_ZEROPAGE,
-			    mmap_changing, 0);
+	return mfill_atomic(dst_mm, start, 0, len,
+			    mmap_changing, MFILL_ATOMIC_ZEROPAGE);
 }
 
 ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
 			      unsigned long len, atomic_t *mmap_changing)
 {
-	return mfill_atomic(dst_mm, start, 0, len, MCOPY_ATOMIC_CONTINUE,
-			    mmap_changing, 0);
+	return mfill_atomic(dst_mm, start, 0, len,
+			    mmap_changing, MFILL_ATOMIC_CONTINUE);
 }
 
 void uffd_wp_range(struct vm_area_struct *dst_vma,
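For the record, the arithmetic behind the new macros: with
NR_MFILL_ATOMIC_MODES == 3, MFILL_ATOMIC_MODE_BITS is
const_ilog2(3 - 1) + 1 == 2, MFILL_ATOMIC_MODE_MASK is BIT(2) - 1 == 0x3,
and MFILL_ATOMIC_WP is BIT(2) == 0x4. A future combine-able flag would
slot in above the mode bits without changing any function signature, for
example (hypothetical name, purely illustrative):

	#define MFILL_ATOMIC_FOO BIT(MFILL_ATOMIC_MODE_BITS + 1)	/* == 0x8 */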