From patchwork Tue Apr 13 05:17:13 2021
X-Patchwork-Submitter: Axel Rasmussen <axelrasmussen@google.com>
X-Patchwork-Id: 12199303
Date: Mon, 12 Apr 2021 22:17:13 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-2-axelrasmussen@google.com>
Subject: [PATCH v2 1/9] userfaultfd/hugetlbfs: avoid including
 userfaultfd_k.h in hugetlb.h
From: Axel Rasmussen <axelrasmussen@google.com>
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
 Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
 Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org, Brian Geffon, "Dr. David Alan Gilbert",
 Mina Almasry, Oliver Upton

Minimizing header file inclusion is desirable. In this case, we can do
so just by forward declaring the enumeration our signature relies upon.
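To illustrate the pattern (a hypothetical sketch, not part of the patch;
the names below are invented):

/* widget.h (hypothetical): defines the full enumeration. */
enum widget_mode { WIDGET_FAST, WIDGET_SAFE };

/* widget_ops.h (hypothetical): a forward declaration is enough for a
 * prototype, so widget.h need not be included here. The enum only has
 * to be complete where the function is defined or called. Forward enum
 * declarations are a GNU C extension, which the kernel builds with. */
enum widget_mode;
int widget_run(enum widget_mode mode);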
Reviewed-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
 include/linux/hugetlb.h | 4 +++-
 mm/hugetlb.c            | 1 +
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 09f1fd12a6fa..3f47650ab79b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -11,7 +11,6 @@
 #include <linux/kref.h>
 #include <linux/pgtable.h>
 #include <linux/gfp.h>
-#include <linux/userfaultfd_k.h>
 
 struct ctl_table;
 struct user_struct;
@@ -135,6 +134,8 @@ void hugetlb_show_meminfo(void);
 unsigned long hugetlb_total_pages(void);
 vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags);
+
+enum mcopy_atomic_mode;
 #ifdef CONFIG_USERFAULTFD
 int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
 				struct vm_area_struct *dst_vma,
@@ -143,6 +144,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
 				enum mcopy_atomic_mode mode,
 				struct page **pagep);
 #endif /* CONFIG_USERFAULTFD */
+
 bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 						struct vm_area_struct *vma,
 						vm_flags_t vm_flags);

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 54d81d5947ed..b1652e747318 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -40,6 +40,7 @@
 #include <linux/hugetlb.h>
 #include <linux/hugetlb_cgroup.h>
 #include <linux/node.h>
+#include <linux/userfaultfd_k.h>
 #include "internal.h"
 
 int hugetlb_max_hstate __read_mostly;

From patchwork Tue Apr 13 05:17:14 2021
X-Patchwork-Submitter: Axel Rasmussen <axelrasmussen@google.com>
X-Patchwork-Id: 12199305
Date: Mon, 12 Apr 2021 22:17:14 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-3-axelrasmussen@google.com>
Subject: [PATCH v2 2/9] userfaultfd/shmem: combine
 shmem_{mcopy_atomic,mfill_zeropage}_pte
From: Axel Rasmussen <axelrasmussen@google.com>

Previously, we did a dance where we had one calling path in
userfaultfd.c (mfill_atomic_pte), but then we split it into two in
shmem_fs.h (shmem_{mcopy_atomic,mfill_zeropage}_pte), and then rejoined
into a single shared function in shmem.c (shmem_mfill_atomic_pte).

This is all a bit overly complex. Just call the single combined shmem
function directly, allowing us to clean up various branches,
boilerplate, etc.

While we're touching this function, two other small cleanup changes:
- offset is equivalent to pgoff, so we can get rid of offset entirely.
- Split two VM_BUG_ON cases into two statements. This means the line
  number reported when the BUG is hit specifies exactly which condition
  was true (illustrated below).
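Concretely, the VM_BUG_ON change, pulled out of the hunk below:

/* Before: one statement, so a BUG report can't say which check fired. */
VM_BUG_ON(PageLocked(page) || PageSwapBacked(page));

/* After: the reported line number pinpoints the failing condition. */
VM_BUG_ON(PageLocked(page));
VM_BUG_ON(PageSwapBacked(page));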
Reviewed-by: Peter Xu
Signed-off-by: Axel Rasmussen
Acked-by: Hugh Dickins
---
 include/linux/shmem_fs.h | 15 +++++-------
 mm/shmem.c               | 52 +++++++++++++---------------------------
 mm/userfaultfd.c         | 10 +++-----
 3 files changed, 25 insertions(+), 52 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d82b6f396588..919e36671fe6 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -122,21 +122,18 @@ static inline bool shmem_file(struct file *file)
 extern bool shmem_charge(struct inode *inode, long pages);
 extern void shmem_uncharge(struct inode *inode, long pages);
 
+#ifdef CONFIG_USERFAULTFD
 #ifdef CONFIG_SHMEM
 extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
 				  struct vm_area_struct *dst_vma,
 				  unsigned long dst_addr,
 				  unsigned long src_addr,
+				  bool zeropage,
 				  struct page **pagep);
-extern int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
-				    pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_addr);
-#else
+#else /* !CONFIG_SHMEM */
 #define shmem_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma, dst_addr, \
-			       src_addr, pagep) ({ BUG(); 0; })
-#define shmem_mfill_zeropage_pte(dst_mm, dst_pmd, dst_vma, \
-				 dst_addr) ({ BUG(); 0; })
-#endif
+			       src_addr, zeropage, pagep) ({ BUG(); 0; })
+#endif /* CONFIG_SHMEM */
+#endif /* CONFIG_USERFAULTFD */
 
 #endif

diff --git a/mm/shmem.c b/mm/shmem.c
index 26c76b13ad23..b72c55aa07fc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2354,13 +2354,14 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
 	return inode;
 }
 
-static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
-				  pmd_t *dst_pmd,
-				  struct vm_area_struct *dst_vma,
-				  unsigned long dst_addr,
-				  unsigned long src_addr,
-				  bool zeropage,
-				  struct page **pagep)
+#ifdef CONFIG_USERFAULTFD
+int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
+			   pmd_t *dst_pmd,
+			   struct vm_area_struct *dst_vma,
+			   unsigned long dst_addr,
+			   unsigned long src_addr,
+			   bool zeropage,
+			   struct page **pagep)
 {
 	struct inode *inode = file_inode(dst_vma->vm_file);
 	struct shmem_inode_info *info = SHMEM_I(inode);
@@ -2372,7 +2373,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	struct page *page;
 	pte_t _dst_pte, *dst_pte;
 	int ret;
-	pgoff_t offset, max_off;
+	pgoff_t max_off;
 
 	ret = -ENOMEM;
 	if (!shmem_inode_acct_block(inode, 1))
@@ -2383,7 +2384,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 		if (!page)
 			goto out_unacct_blocks;
 
-		if (!zeropage) {	/* mcopy_atomic */
+		if (!zeropage) {	/* COPY */
 			page_kaddr = kmap_atomic(page);
 			ret = copy_from_user(page_kaddr,
 					     (const void __user *)src_addr,
@@ -2397,7 +2398,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 				/* don't free the page */
 				return -ENOENT;
 			}
-		} else {		/* mfill_zeropage_atomic */
+		} else {		/* ZEROPAGE */
 			clear_highpage(page);
 		}
 	} else {
@@ -2405,15 +2406,15 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 		*pagep = NULL;
 	}
 
-	VM_BUG_ON(PageLocked(page) || PageSwapBacked(page));
+	VM_BUG_ON(PageLocked(page));
+	VM_BUG_ON(PageSwapBacked(page));
 	__SetPageLocked(page);
 	__SetPageSwapBacked(page);
 	__SetPageUptodate(page);
 
 	ret = -EFAULT;
-	offset = linear_page_index(dst_vma, dst_addr);
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(offset >= max_off))
+	if (unlikely(pgoff >= max_off))
 		goto out_release;
 
 	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
@@ -2439,7 +2440,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 
 	ret = -EFAULT;
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(offset >= max_off))
+	if (unlikely(pgoff >= max_off))
 		goto out_release_unlock;
 
 	ret = -EEXIST;
@@ -2476,28 +2477,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	shmem_inode_unacct_blocks(inode, 1);
 	goto out;
 }
-
-int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
-			   pmd_t *dst_pmd,
-			   struct vm_area_struct *dst_vma,
-			   unsigned long dst_addr,
-			   unsigned long src_addr,
-			   struct page **pagep)
-{
-	return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
-				      dst_addr, src_addr, false, pagep);
-}
-
-int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
-			     pmd_t *dst_pmd,
-			     struct vm_area_struct *dst_vma,
-			     unsigned long dst_addr)
-{
-	struct page *page = NULL;
-
-	return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
-				      dst_addr, 0, true, &page);
-}
+#endif /* CONFIG_USERFAULTFD */
 
 #ifdef CONFIG_TMPFS
 static const struct inode_operations shmem_symlink_inode_operations;

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e14b3820c6a8..23fa2583bbd1 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -440,13 +440,9 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 						 dst_vma, dst_addr);
 	} else {
 		VM_WARN_ON_ONCE(wp_copy);
-		if (!zeropage)
-			err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd,
-						     dst_vma, dst_addr,
-						     src_addr, page);
-		else
-			err = shmem_mfill_zeropage_pte(dst_mm, dst_pmd,
-						       dst_vma, dst_addr);
+		err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
+					     dst_addr, src_addr, zeropage,
+					     page);
 	}
 
 	return err;

From patchwork Tue Apr 13 05:17:15 2021
X-Patchwork-Submitter: Axel Rasmussen <axelrasmussen@google.com>
X-Patchwork-Id: 12199307
Date: Mon, 12 Apr 2021 22:17:15 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-4-axelrasmussen@google.com>
Subject: [PATCH v2 3/9] userfaultfd/shmem: support minor fault registration
 for shmem
From: Axel Rasmussen <axelrasmussen@google.com>
David Alan Gilbert" , Mina Almasry , Oliver Upton X-Stat-Signature: 43hr8tyhjykx4rhg89wnenp9n4gnpmdw X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: D6503A000390 Received-SPF: none (flex--axelrasmussen.bounces.google.com>: No applicable sender policy available) receiver=imf15; identity=mailfrom; envelope-from="<3ail1YA0KCJs5S9GM5NHPNN9IBJJBG9.7JHGDIPS-HHFQ57F.JMB@flex--axelrasmussen.bounces.google.com>"; helo=mail-yb1-f202.google.com; client-ip=209.85.219.202 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1618291049-909577 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This patch allows shmem-backed VMAs to be registered for minor faults. Minor faults are appropriately relayed to userspace in the fault path, for VMAs with the relevant flag. This commit doesn't hook up the UFFDIO_CONTINUE ioctl for shmem-backed minor faults, though, so userspace doesn't yet have a way to resolve such faults. Signed-off-by: Axel Rasmussen Acked-by: Peter Xu --- fs/userfaultfd.c | 6 +++--- include/uapi/linux/userfaultfd.h | 7 ++++++- mm/memory.c | 8 +++++--- mm/shmem.c | 10 +++++++++- 4 files changed, 23 insertions(+), 8 deletions(-) diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 14f92285d04f..9f3b8684cf3c 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -1267,8 +1267,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma, } if (vm_flags & VM_UFFD_MINOR) { - /* FIXME: Add minor fault interception for shmem. */ - if (!is_vm_hugetlb_page(vma)) + if (!(is_vm_hugetlb_page(vma) || vma_is_shmem(vma))) return false; } @@ -1941,7 +1940,8 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx, /* report all available features and ioctls to userland */ uffdio_api.features = UFFD_API_FEATURES; #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR - uffdio_api.features &= ~UFFD_FEATURE_MINOR_HUGETLBFS; + uffdio_api.features &= + ~(UFFD_FEATURE_MINOR_HUGETLBFS | UFFD_FEATURE_MINOR_SHMEM); #endif uffdio_api.ioctls = UFFD_API_IOCTLS; ret = -EFAULT; diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h index bafbeb1a2624..159a74e9564f 100644 --- a/include/uapi/linux/userfaultfd.h +++ b/include/uapi/linux/userfaultfd.h @@ -31,7 +31,8 @@ UFFD_FEATURE_MISSING_SHMEM | \ UFFD_FEATURE_SIGBUS | \ UFFD_FEATURE_THREAD_ID | \ - UFFD_FEATURE_MINOR_HUGETLBFS) + UFFD_FEATURE_MINOR_HUGETLBFS | \ + UFFD_FEATURE_MINOR_SHMEM) #define UFFD_API_IOCTLS \ ((__u64)1 << _UFFDIO_REGISTER | \ (__u64)1 << _UFFDIO_UNREGISTER | \ @@ -185,6 +186,9 @@ struct uffdio_api { * UFFD_FEATURE_MINOR_HUGETLBFS indicates that minor faults * can be intercepted (via REGISTER_MODE_MINOR) for * hugetlbfs-backed pages. + * + * UFFD_FEATURE_MINOR_SHMEM indicates the same support as + * UFFD_FEATURE_MINOR_HUGETLBFS, but for shmem-backed pages instead. */ #define UFFD_FEATURE_PAGEFAULT_FLAG_WP (1<<0) #define UFFD_FEATURE_EVENT_FORK (1<<1) @@ -196,6 +200,7 @@ struct uffdio_api { #define UFFD_FEATURE_SIGBUS (1<<7) #define UFFD_FEATURE_THREAD_ID (1<<8) #define UFFD_FEATURE_MINOR_HUGETLBFS (1<<9) +#define UFFD_FEATURE_MINOR_SHMEM (1<<10) __u64 features; __u64 ioctls; diff --git a/mm/memory.c b/mm/memory.c index 4e358601c5d6..cc71a445c76c 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3972,9 +3972,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf) * something). 
Signed-off-by: Axel Rasmussen
Acked-by: Peter Xu
---
 fs/userfaultfd.c                 |  6 +++---
 include/uapi/linux/userfaultfd.h |  7 ++++++-
 mm/memory.c                      |  8 +++++---
 mm/shmem.c                       | 10 +++++++++-
 4 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 14f92285d04f..9f3b8684cf3c 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1267,8 +1267,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 	}
 
 	if (vm_flags & VM_UFFD_MINOR) {
-		/* FIXME: Add minor fault interception for shmem. */
-		if (!is_vm_hugetlb_page(vma))
+		if (!(is_vm_hugetlb_page(vma) || vma_is_shmem(vma)))
 			return false;
 	}
 
@@ -1941,7 +1940,8 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
 	/* report all available features and ioctls to userland */
 	uffdio_api.features = UFFD_API_FEATURES;
 #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
-	uffdio_api.features &= ~UFFD_FEATURE_MINOR_HUGETLBFS;
+	uffdio_api.features &=
+		~(UFFD_FEATURE_MINOR_HUGETLBFS | UFFD_FEATURE_MINOR_SHMEM);
 #endif
 	uffdio_api.ioctls = UFFD_API_IOCTLS;
 	ret = -EFAULT;

diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
index bafbeb1a2624..159a74e9564f 100644
--- a/include/uapi/linux/userfaultfd.h
+++ b/include/uapi/linux/userfaultfd.h
@@ -31,7 +31,8 @@
 			   UFFD_FEATURE_MISSING_SHMEM |		\
 			   UFFD_FEATURE_SIGBUS |		\
 			   UFFD_FEATURE_THREAD_ID |		\
-			   UFFD_FEATURE_MINOR_HUGETLBFS)
+			   UFFD_FEATURE_MINOR_HUGETLBFS |	\
+			   UFFD_FEATURE_MINOR_SHMEM)
 #define UFFD_API_IOCTLS				\
 	((__u64)1 << _UFFDIO_REGISTER |		\
 	 (__u64)1 << _UFFDIO_UNREGISTER |	\
@@ -185,6 +186,9 @@ struct uffdio_api {
 	 * UFFD_FEATURE_MINOR_HUGETLBFS indicates that minor faults
 	 * can be intercepted (via REGISTER_MODE_MINOR) for
 	 * hugetlbfs-backed pages.
+	 *
+	 * UFFD_FEATURE_MINOR_SHMEM indicates the same support as
+	 * UFFD_FEATURE_MINOR_HUGETLBFS, but for shmem-backed pages instead.
 	 */
 #define UFFD_FEATURE_PAGEFAULT_FLAG_WP		(1<<0)
 #define UFFD_FEATURE_EVENT_FORK			(1<<1)
@@ -196,6 +200,7 @@ struct uffdio_api {
 #define UFFD_FEATURE_SIGBUS			(1<<7)
 #define UFFD_FEATURE_THREAD_ID			(1<<8)
 #define UFFD_FEATURE_MINOR_HUGETLBFS		(1<<9)
+#define UFFD_FEATURE_MINOR_SHMEM		(1<<10)
 	__u64 features;
 
 	__u64 ioctls;

diff --git a/mm/memory.c b/mm/memory.c
index 4e358601c5d6..cc71a445c76c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3972,9 +3972,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
 	 * something).
 	 */
 	if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) {
-		ret = do_fault_around(vmf);
-		if (ret)
-			return ret;
+		if (likely(!userfaultfd_minor(vmf->vma))) {
+			ret = do_fault_around(vmf);
+			if (ret)
+				return ret;
+		}
 	}
 
 	ret = __do_fault(vmf);

diff --git a/mm/shmem.c b/mm/shmem.c
index b72c55aa07fc..3f48cb5e8404 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1785,7 +1785,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
  * vm. If we swap it in we mark it dirty since we also free the swap
  * entry since a page cannot live in both the swap and page cache.
  *
- * vmf and fault_type are only supplied by shmem_fault:
+ * vma, vmf, and fault_type are only supplied by shmem_fault:
  * otherwise they are NULL.
  */
 static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
@@ -1820,6 +1820,14 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 
 	page = pagecache_get_page(mapping, index,
 					FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
+
+	if (page && vma && userfaultfd_minor(vma)) {
+		unlock_page(page);
+		put_page(page);
+		*fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
+		return 0;
+	}
+
 	if (xa_is_value(page)) {
 		error = shmem_swapin_page(inode, index, &page,
 					  sgp, gfp, vma, fault_type);

From patchwork Tue Apr 13 05:17:16 2021
X-Patchwork-Submitter: Axel Rasmussen <axelrasmussen@google.com>
X-Patchwork-Id: 12199309
Date: Mon, 12 Apr 2021 22:17:16 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-5-axelrasmussen@google.com>
Subject: [PATCH v2 4/9] userfaultfd/shmem: support UFFDIO_CONTINUE for shmem
From: Axel Rasmussen <axelrasmussen@google.com>

With this change, userspace can resolve a minor fault within a
shmem-backed area with a UFFDIO_CONTINUE ioctl. The semantics for this
match those for hugetlbfs: we look up the existing page in the page
cache, and install PTEs for it.

This commit introduces a new helper: mcopy_atomic_install_ptes.

Why handle UFFDIO_CONTINUE for shmem in mm/userfaultfd.c, instead of in
shmem.c? The existing userfault implementation only relies on shmem.c
for VM_SHARED VMAs. However, minor fault handling / CONTINUE work just
fine for !VM_SHARED VMAs as well. We'd prefer to handle CONTINUE for
shmem in one place, regardless of shared/private (to reduce code
duplication).

Why add a new mcopy_atomic_install_ptes helper? A problem we have with
CONTINUE is that shmem_mcopy_atomic_pte() and mcopy_atomic_pte() are
*close* to what we want, but not exactly. We do want to set up the PTEs
in a CONTINUE operation, but we don't want to e.g. allocate a new page,
charge it (e.g. to the shmem inode), manipulate various flags, etc.
Also we have the problem stated above: shmem_mcopy_atomic_pte() and
mcopy_atomic_pte() each handle one half of the problem (shared /
private) that CONTINUE cares about. So, introduce
mcontinue_atomic_pte() to handle all of the shmem CONTINUE cases, built
on the new helper so it doesn't duplicate code with mcopy_atomic_pte().

In a future commit, shmem_mcopy_atomic_pte() will also be modified to
use this new helper. However, since this is a bigger refactor, it seems
most clear to do it as a separate change.
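For reference, a sketch of the userspace side (uffd, page_size, and the
faulting address are assumed to be at hand): once the page contents are
known to be correct, UFFDIO_CONTINUE asks the kernel to install PTEs for
the page already present in the page cache:

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/* Sketch: resolve one minor fault at 'fault_addr'. */
static int resolve_minor_fault(int uffd, unsigned long fault_addr,
			       unsigned long page_size)
{
	struct uffdio_continue cont = {
		.range = {
			.start = fault_addr & ~(page_size - 1),
			.len = page_size,
		},
	};

	/* EEXIST just means another thread already installed the PTE. */
	if (ioctl(uffd, UFFDIO_CONTINUE, &cont) && errno != EEXIST)
		return -1;
	return 0;
}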
Signed-off-by: Axel Rasmussen
---
 mm/userfaultfd.c | 176 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 131 insertions(+), 45 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 23fa2583bbd1..8df0438f5d6a 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -48,6 +48,87 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 	return dst_vma;
 }
 
+/*
+ * Install PTEs, to map dst_addr (within dst_vma) to page.
+ *
+ * This function handles MCOPY_ATOMIC_CONTINUE (which is always file-backed),
+ * whether or not dst_vma is VM_SHARED. It also handles the more general
+ * MCOPY_ATOMIC_NORMAL case, when dst_vma is *not* VM_SHARED (it may be file
+ * backed, or not).
+ *
+ * Note that MCOPY_ATOMIC_NORMAL for a VM_SHARED dst_vma is handled by
+ * shmem_mcopy_atomic_pte instead.
+ */
+static int mcopy_atomic_install_ptes(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+				     struct vm_area_struct *dst_vma,
+				     unsigned long dst_addr, struct page *page,
+				     bool newly_allocated, bool wp_copy)
+{
+	int ret;
+	pte_t _dst_pte, *dst_pte;
+	int writable;
+	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
+	spinlock_t *ptl;
+	struct inode *inode;
+	pgoff_t offset, max_off;
+
+	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
+	writable = dst_vma->vm_flags & VM_WRITE;
+	/* For private, non-anon we need CoW (don't write to page cache!) */
+	if (!vma_is_anonymous(dst_vma) && !vm_shared)
+		writable = 0;
+
+	if (writable || vma_is_anonymous(dst_vma))
+		_dst_pte = pte_mkdirty(_dst_pte);
+	if (writable) {
+		if (wp_copy)
+			_dst_pte = pte_mkuffd_wp(_dst_pte);
+		else
+			_dst_pte = pte_mkwrite(_dst_pte);
+	} else if (vm_shared) {
+		/*
+		 * Since we didn't pte_mkdirty(), mark the page dirty or it
+		 * could be freed from under us. We could do this
+		 * unconditionally, but doing it only if !writable is faster.
+		 */
+		set_page_dirty(page);
+	}
+
+	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+
+	if (vma_is_shmem(dst_vma)) {
+		/* serialize against truncate with the page table lock */
+		inode = dst_vma->vm_file->f_inode;
+		offset = linear_page_index(dst_vma, dst_addr);
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		ret = -EFAULT;
+		if (unlikely(offset >= max_off))
+			goto out_unlock;
+	}
+
+	ret = -EEXIST;
+	if (!pte_none(*dst_pte))
+		goto out_unlock;
+
+	inc_mm_counter(dst_mm, mm_counter(page));
+	if (vma_is_shmem(dst_vma))
+		page_add_file_rmap(page, false);
+	else
+		page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
+
+	if (newly_allocated)
+		lru_cache_add_inactive_or_unevictable(page, dst_vma);
+
+	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+
+	/* No need to invalidate - it was non-present before */
+	update_mmu_cache(dst_vma, dst_addr, dst_pte);
+	ret = 0;
+out_unlock:
+	pte_unmap_unlock(dst_pte, ptl);
+	return ret;
+}
+
 static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 			    pmd_t *dst_pmd,
 			    struct vm_area_struct *dst_vma,
@@ -56,13 +137,9 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 			    struct page **pagep,
 			    bool wp_copy)
 {
-	pte_t _dst_pte, *dst_pte;
-	spinlock_t *ptl;
 	void *page_kaddr;
 	int ret;
 	struct page *page;
-	pgoff_t offset, max_off;
-	struct inode *inode;
 
 	if (!*pagep) {
 		ret = -ENOMEM;
@@ -99,43 +176,12 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
 		goto out_release;
 
-	_dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot));
-	if (dst_vma->vm_flags & VM_WRITE) {
-		if (wp_copy)
-			_dst_pte = pte_mkuffd_wp(_dst_pte);
-		else
-			_dst_pte = pte_mkwrite(_dst_pte);
-	}
-
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
-	if (dst_vma->vm_file) {
-		/* the shmem MAP_PRIVATE case requires checking the i_size */
-		inode = dst_vma->vm_file->f_inode;
-		offset = linear_page_index(dst_vma, dst_addr);
-		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-		ret = -EFAULT;
-		if (unlikely(offset >= max_off))
-			goto out_release_uncharge_unlock;
-	}
-	ret = -EEXIST;
-	if (!pte_none(*dst_pte))
-		goto out_release_uncharge_unlock;
-
-	inc_mm_counter(dst_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
-	lru_cache_add_inactive_or_unevictable(page, dst_vma);
-
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-
-	pte_unmap_unlock(dst_pte, ptl);
-	ret = 0;
+	ret = mcopy_atomic_install_ptes(dst_mm, dst_pmd, dst_vma, dst_addr,
+					page, true, wp_copy);
+	if (ret)
+		goto out_release;
 out:
 	return ret;
-out_release_uncharge_unlock:
-	pte_unmap_unlock(dst_pte, ptl);
 out_release:
 	put_page(page);
 	goto out;
@@ -176,6 +222,41 @@ static int mfill_zeropage_pte(struct mm_struct *dst_mm,
 	return ret;
 }
 
+/* Handles UFFDIO_CONTINUE for all shmem VMAs (shared or private). */
+static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
+				pmd_t *dst_pmd,
+				struct vm_area_struct *dst_vma,
+				unsigned long dst_addr,
+				bool wp_copy)
+{
+	struct inode *inode = file_inode(dst_vma->vm_file);
+	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
+	struct page *page;
+	int ret;
+
+	ret = shmem_getpage(inode, pgoff, &page, SGP_READ);
+	if (ret)
+		goto out;
+	if (!page) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = mcopy_atomic_install_ptes(dst_mm, dst_pmd, dst_vma, dst_addr,
+					page, false, wp_copy);
+	if (ret)
+		goto out_release;
+
+	unlock_page(page);
+	ret = 0;
+out:
+	return ret;
+out_release:
+	unlock_page(page);
+	put_page(page);
+	goto out;
+}
+
 static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 {
 	pgd_t *pgd;
@@ -415,11 +496,16 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 						unsigned long dst_addr,
 						unsigned long src_addr,
 						struct page **page,
-						bool zeropage,
+						enum mcopy_atomic_mode mode,
 						bool wp_copy)
 {
 	ssize_t err;
 
+	if (mode == MCOPY_ATOMIC_CONTINUE) {
+		return mcontinue_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+					    wp_copy);
+	}
+
 	/*
 	 * The normal page fault path for a shmem will invoke the
 	 * fault, fill the hole in the file and COW it right away. The
@@ -431,7 +517,7 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 	 * and not in the radix tree.
 	 */
 	if (!(dst_vma->vm_flags & VM_SHARED)) {
-		if (!zeropage)
+		if (mode == MCOPY_ATOMIC_NORMAL)
 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
 					       dst_addr, src_addr, page,
 					       wp_copy);
@@ -441,7 +527,8 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 	} else {
 		VM_WARN_ON_ONCE(wp_copy);
 		err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
-					     dst_addr, src_addr, zeropage,
+					     dst_addr, src_addr,
+					     mode != MCOPY_ATOMIC_NORMAL,
 					     page);
 	}
 
@@ -463,7 +550,6 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	long copied;
 	struct page *page;
 	bool wp_copy;
-	bool zeropage = (mcopy_mode == MCOPY_ATOMIC_ZEROPAGE);
 
 	/*
 	 * Sanitize the command parameters:
@@ -526,7 +612,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
 
-	if (mcopy_mode == MCOPY_ATOMIC_CONTINUE)
+	if (!vma_is_shmem(dst_vma) && mcopy_mode == MCOPY_ATOMIC_CONTINUE)
 		goto out_unlock;
 
 	/*
@@ -574,7 +660,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 		BUG_ON(pmd_trans_huge(*dst_pmd));
 
 		err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
-				       src_addr, &page, zeropage, wp_copy);
+				       src_addr, &page, mcopy_mode, wp_copy);
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {

From patchwork Tue Apr 13 05:17:17 2021
X-Patchwork-Submitter: Axel Rasmussen <axelrasmussen@google.com>
X-Patchwork-Id: 12199311
Date: Mon, 12 Apr 2021 22:17:17 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-6-axelrasmussen@google.com>
Subject: [PATCH v2 5/9] userfaultfd/selftests: use memfd_create for shmem
 test type
From: Axel Rasmussen <axelrasmussen@google.com>
This is a preparatory commit. In the future, we want to be able to set
up alias mappings for area_src and area_dst in the shmem test, like we
do in the hugetlb_shared test. With a VMA obtained via
mmap(MAP_ANONYMOUS | MAP_SHARED), it isn't clear how to do this.

So, mmap() with an fd, so we can create alias mappings. Use
memfd_create instead of actually passing in a tmpfs path like hugetlb
does, since it's more convenient / simpler to run, and works just as
well.

Future commits will:
1. Set up the alias mappings.
2. Extend our tests to actually take advantage of this, to test new
   userfaultfd behavior being introduced in this series.

Also, a small fix in the area we're changing: when the hugetlb setup
fails in main(), pass in the right argv[] so we actually print out the
hugetlb file path.
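The difference, as a minimal sketch (names assumed): an anonymous shared
mapping leaves no handle to mmap() the same pages again, while a memfd
does:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: allocate 'len' bytes of shared memory backed by an fd, so the
 * same pages can be mapped a second time later. With
 * MAP_ANONYMOUS | MAP_SHARED there is no fd to pass to a later mmap(). */
static void *allocate_with_fd(size_t len, int *fd_out)
{
	int fd = memfd_create("uffd-test-area", 0);	/* name is arbitrary */

	if (fd < 0 || ftruncate(fd, len))
		return MAP_FAILED;
	*fd_out = fd;
	return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}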
Signed-off-by: Axel Rasmussen
Reviewed-by: Peter Xu
---
 tools/testing/selftests/vm/userfaultfd.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 6339aeaeeff8..fc40831f818f 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -85,6 +85,7 @@ static bool test_uffdio_wp = false;
 static bool test_uffdio_minor = false;
 
 static bool map_shared;
+static int shm_fd;
 static int huge_fd;
 static char *huge_fd_off0;
 static unsigned long long *count_verify;
@@ -277,8 +278,11 @@ static void shmem_release_pages(char *rel_area)
 
 static void shmem_allocate_area(void **alloc_area)
 {
+	unsigned long offset =
+		alloc_area == (void **)&area_src ? 0 : nr_pages * page_size;
+
 	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
-			   MAP_ANONYMOUS | MAP_SHARED, -1, 0);
+			   MAP_SHARED, shm_fd, offset);
 	if (*alloc_area == MAP_FAILED)
 		err("mmap of memfd failed");
 }
@@ -1448,6 +1452,16 @@ int main(int argc, char **argv)
 			err("Open of %s failed", argv[4]);
 		if (ftruncate(huge_fd, 0))
 			err("ftruncate %s to size 0 failed", argv[4]);
+	} else if (test_type == TEST_SHMEM) {
+		shm_fd = memfd_create(argv[0], 0);
+		if (shm_fd < 0)
+			err("memfd_create");
+		if (ftruncate(shm_fd, nr_pages * page_size * 2))
+			err("ftruncate");
+		if (fallocate(shm_fd,
+			      FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0,
+			      nr_pages * page_size * 2))
+			err("fallocate");
 	}
 	printf("nr_pages: %lu, nr_pages_per_cpu: %lu\n",
 	       nr_pages, nr_pages_per_cpu);

From patchwork Tue Apr 13 05:17:18 2021
X-Patchwork-Submitter: Axel Rasmussen <axelrasmussen@google.com>
X-Patchwork-Id: 12199313
Date: Mon, 12 Apr 2021 22:17:18 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-7-axelrasmussen@google.com>
Subject: [PATCH v2 6/9] userfaultfd/selftests: create alias mappings in the
 shmem test
From: Axel Rasmussen <axelrasmussen@google.com>

Previously, we just allocated two shm areas: area_src and area_dst.
With this commit, change this so we also allocate area_src_alias, and
area_dst_alias.

area_*_alias and area_* (respectively) point to the same underlying
physical pages, but are different VMAs. In a future commit in this
series, we'll leverage this setup to exercise minor fault handling
support for shmem, just like we do in the hugetlb_shared test.
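The effect, as a minimal sketch (shm_fd, len, and off assumed): two
MAP_SHARED mappings of the same memfd range are distinct VMAs over the
same pages:

char *a = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, off);
char *b = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, off);

a[0] = 42;		/* a write through one VMA... */
assert(b[0] == 42);	/* ...is visible through the other */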
Signed-off-by: Axel Rasmussen
Reviewed-by: Peter Xu
---
 tools/testing/selftests/vm/userfaultfd.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index fc40831f818f..1f65c4ab7994 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -278,13 +278,29 @@ static void shmem_release_pages(char *rel_area)
 
 static void shmem_allocate_area(void **alloc_area)
 {
-	unsigned long offset =
-		alloc_area == (void **)&area_src ? 0 : nr_pages * page_size;
+	void *area_alias = NULL;
+	bool is_src = alloc_area == (void **)&area_src;
+	unsigned long offset = is_src ? 0 : nr_pages * page_size;
 
 	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
 			   MAP_SHARED, shm_fd, offset);
 	if (*alloc_area == MAP_FAILED)
 		err("mmap of memfd failed");
+
+	area_alias = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
+			  MAP_SHARED, shm_fd, offset);
+	if (area_alias == MAP_FAILED)
+		err("mmap of memfd alias failed");
+
+	if (is_src)
+		area_src_alias = area_alias;
+	else
+		area_dst_alias = area_alias;
+}
+
+static void shmem_alias_mapping(__u64 *start, size_t len, unsigned long offset)
+{
+	*start = (unsigned long)area_dst_alias + offset;
 }
 
 struct uffd_test_ops {
@@ -314,7 +330,7 @@ static struct uffd_test_ops shmem_uffd_test_ops = {
 	.expected_ioctls = SHMEM_EXPECTED_IOCTLS,
 	.allocate_area	= shmem_allocate_area,
 	.release_pages	= shmem_release_pages,
-	.alias_mapping = noop_alias_mapping,
+	.alias_mapping = shmem_alias_mapping,
 };
 
 static struct uffd_test_ops hugetlb_uffd_test_ops = {

From patchwork Tue Apr 13 05:17:19 2021
X-Patchwork-Submitter: Axel Rasmussen <axelrasmussen@google.com>
X-Patchwork-Id: 12199315
From patchwork Tue Apr 13 05:17:19 2021

Date: Mon, 12 Apr 2021 22:17:19 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-8-axelrasmussen@google.com>
Subject: [PATCH v2 7/9] userfaultfd/selftests: reinitialize test context in each test
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
 Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
 Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
 "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

Currently, the test context (fds, mmap-ed areas, etc.) is global. Each
test mutates this state in some way, in some cases really "clobbering"
it (e.g., the events test mremap-ing area_dst over the top of area_src,
or the minor fault tests overwriting the count_verify values in the
test areas). We run the tests in a particular order, and each test is
careful to make the right assumptions about its starting state. But
this is fragile. It's better for a test's success or failure not to
depend on what some other prior test case did to the global state.

To that end, clear and reinitialize the test context at the start of
each test case, so whatever prior test cases did doesn't affect future
tests. This is particularly relevant to this series because the events
test's mremap of area_dst screws up assumptions the minor fault test
was relying on. This wasn't a problem for hugetlb, as we don't mremap
in that case.

Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 221 +++++++++++++----------
 1 file changed, 127 insertions(+), 94 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 1f65c4ab7994..0ff01f437a39 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -89,7 +89,8 @@ static int shm_fd;
 static int huge_fd;
 static char *huge_fd_off0;
 static unsigned long long *count_verify;
-static int uffd, uffd_flags, finished, *pipefd;
+static int uffd = -1;
+static int uffd_flags, finished, *pipefd;
 static char *area_src, *area_src_alias, *area_dst, *area_dst_alias;
 static char *zeropage;
 pthread_attr_t attr;
@@ -342,6 +343,121 @@ static struct uffd_test_ops hugetlb_uffd_test_ops = {
 
 static struct uffd_test_ops *uffd_test_ops;
 
+static int userfaultfd_open(uint64_t *features)
+{
+	struct uffdio_api uffdio_api;
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	if (uffd < 0)
+		err("userfaultfd syscall not available in this kernel");
+	uffd_flags = fcntl(uffd, F_GETFD, NULL);
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = *features;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api))
+		err("UFFDIO_API failed.\nPlease make sure to "
+		    "run with either root or ptrace capability.");
+	if (uffdio_api.api != UFFD_API)
+		err("UFFDIO_API error: %" PRIu64, (uint64_t)uffdio_api.api);
+
+	*features = uffdio_api.features;
+	return 0;
+}
+
+static int uffd_test_ctx_init_ext(uint64_t *features)
+{
+	unsigned long nr, cpu;
+
+	uffd_test_ops->allocate_area((void **)&area_src);
+	if (!area_src)
+		return 1;
+	uffd_test_ops->allocate_area((void **)&area_dst);
+	if (!area_dst)
+		return 1;
+
+	uffd_test_ops->release_pages(area_src);
+	uffd_test_ops->release_pages(area_dst);
+
+	if (userfaultfd_open(features))
+		return 1;
+
+	count_verify = malloc(nr_pages * sizeof(unsigned long long));
+	if (!count_verify)
err("count_verify"); + + for (nr = 0; nr < nr_pages; nr++) { + *area_mutex(area_src, nr) = + (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER; + count_verify[nr] = *area_count(area_src, nr) = 1; + /* + * In the transition between 255 to 256, powerpc will + * read out of order in my_bcmp and see both bytes as + * zero, so leave a placeholder below always non-zero + * after the count, to avoid my_bcmp to trigger false + * positives. + */ + *(area_count(area_src, nr) + 1) = 1; + } + + pipefd = malloc(sizeof(int) * nr_cpus * 2); + if (!pipefd) + err("pipefd"); + for (cpu = 0; cpu < nr_cpus; cpu++) + if (pipe2(&pipefd[cpu * 2], O_CLOEXEC | O_NONBLOCK)) + err("pipe"); + + return 0; +} + +static inline int uffd_test_ctx_init(uint64_t features) +{ + return uffd_test_ctx_init_ext(&features); +} + +static inline int munmap_area(void **area) +{ + if (*area) + if (munmap(*area, nr_pages * page_size)) + err("munmap"); + + *area = NULL; + return 0; +} + +static int uffd_test_ctx_clear(void) +{ + int ret = 0; + size_t i; + + if (pipefd) { + for (i = 0; i < nr_cpus * 2; ++i) { + if (close(pipefd[i])) + err("close pipefd"); + } + free(pipefd); + pipefd = NULL; + } + + if (count_verify) { + free(count_verify); + count_verify = NULL; + } + + if (uffd != -1) { + if (close(uffd)) + err("close uffd"); + uffd = -1; + } + + huge_fd_off0 = NULL; + ret |= munmap_area((void **)&area_src); + ret |= munmap_area((void **)&area_src_alias); + ret |= munmap_area((void **)&area_dst); + ret |= munmap_area((void **)&area_dst_alias); + + return ret; +} + static int my_bcmp(char *str1, char *str2, size_t n) { unsigned long i; @@ -726,40 +842,6 @@ static int stress(struct uffd_stats *uffd_stats) return 0; } -static int userfaultfd_open_ext(uint64_t *features) -{ - struct uffdio_api uffdio_api; - - uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY); - if (uffd < 0) { - fprintf(stderr, - "userfaultfd syscall not available in this kernel\n"); - return 1; - } - uffd_flags = fcntl(uffd, F_GETFD, NULL); - - uffdio_api.api = UFFD_API; - uffdio_api.features = *features; - if (ioctl(uffd, UFFDIO_API, &uffdio_api)) { - fprintf(stderr, "UFFDIO_API failed.\nPlease make sure to " - "run with either root or ptrace capability.\n"); - return 1; - } - if (uffdio_api.api != UFFD_API) { - fprintf(stderr, "UFFDIO_API error: %" PRIu64 "\n", - (uint64_t)uffdio_api.api); - return 1; - } - - *features = uffdio_api.features; - return 0; -} - -static int userfaultfd_open(uint64_t features) -{ - return userfaultfd_open_ext(&features); -} - sigjmp_buf jbuf, *sigbuf; static void sighndl(int sig, siginfo_t *siginfo, void *ptr) @@ -868,6 +950,8 @@ static int faulting_process(int signal_test) MREMAP_MAYMOVE | MREMAP_FIXED, area_src); if (area_dst == MAP_FAILED) err("mremap"); + /* Reset area_src since we just clobbered it */ + area_src = NULL; for (; nr < nr_pages; nr++) { count = *area_count(area_dst, nr); @@ -961,10 +1045,9 @@ static int userfaultfd_zeropage_test(void) printf("testing UFFDIO_ZEROPAGE: "); fflush(stdout); - uffd_test_ops->release_pages(area_dst); - - if (userfaultfd_open(0)) + if (uffd_test_ctx_clear() || uffd_test_ctx_init(0)) return 1; + uffdio_register.range.start = (unsigned long) area_dst; uffdio_register.range.len = nr_pages * page_size; uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING; @@ -981,7 +1064,6 @@ static int userfaultfd_zeropage_test(void) if (my_bcmp(area_dst, zeropage, page_size)) err("zeropage is not zero"); - close(uffd); printf("done.\n"); return 0; } @@ -999,12 +1081,11 @@ static int 
userfaultfd_events_test(void)
 	printf("testing events (fork, remap, remove): ");
 	fflush(stdout);
 
-	uffd_test_ops->release_pages(area_dst);
-
 	features = UFFD_FEATURE_EVENT_FORK | UFFD_FEATURE_EVENT_REMAP |
 		UFFD_FEATURE_EVENT_REMOVE;
-	if (userfaultfd_open(features))
+	if (uffd_test_ctx_clear() || uffd_test_ctx_init(features))
 		return 1;
+
 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
 
 	uffdio_register.range.start = (unsigned long) area_dst;
@@ -1037,8 +1118,6 @@ static int userfaultfd_events_test(void)
 	if (pthread_join(uffd_mon, NULL))
 		return 1;
 
-	close(uffd);
-
 	uffd_stats_report(&stats, 1);
 
 	return stats.missing_faults != nr_pages;
@@ -1058,11 +1137,10 @@ static int userfaultfd_sig_test(void)
 	printf("testing signal delivery: ");
 	fflush(stdout);
 
-	uffd_test_ops->release_pages(area_dst);
-
 	features = UFFD_FEATURE_EVENT_FORK|UFFD_FEATURE_SIGBUS;
-	if (userfaultfd_open(features))
+	if (uffd_test_ctx_clear() || uffd_test_ctx_init(features))
 		return 1;
+
 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
 
 	uffdio_register.range.start = (unsigned long) area_dst;
@@ -1126,9 +1204,7 @@ static int userfaultfd_minor_test(void)
 	printf("testing minor faults: ");
 	fflush(stdout);
 
-	uffd_test_ops->release_pages(area_dst);
-
-	if (userfaultfd_open_ext(&features))
+	if (uffd_test_ctx_clear() || uffd_test_ctx_init_ext(&features))
 		return 1;
 
 	/* If kernel reports the feature isn't supported, skip the test. */
 	if (!(features & UFFD_FEATURE_MINOR_HUGETLBFS)) {
@@ -1183,8 +1259,6 @@ static int userfaultfd_minor_test(void)
 	if (pthread_join(uffd_mon, NULL))
 		return 1;
 
-	close(uffd);
-
 	uffd_stats_report(&stats, 1);
 
 	return stats.missing_faults != 0 || stats.minor_faults != nr_pages;
@@ -1196,50 +1270,10 @@ static int userfaultfd_stress(void)
 	char *tmp_area;
 	unsigned long nr;
 	struct uffdio_register uffdio_register;
-	unsigned long cpu;
 	struct uffd_stats uffd_stats[nr_cpus];
 
-	uffd_test_ops->allocate_area((void **)&area_src);
-	if (!area_src)
-		return 1;
-	uffd_test_ops->allocate_area((void **)&area_dst);
-	if (!area_dst)
-		return 1;
-
-	if (userfaultfd_open(0))
-		return 1;
-
-	count_verify = malloc(nr_pages * sizeof(unsigned long long));
-	if (!count_verify) {
-		perror("count_verify");
-		return 1;
-	}
-
-	for (nr = 0; nr < nr_pages; nr++) {
-		*area_mutex(area_src, nr) = (pthread_mutex_t)
-			PTHREAD_MUTEX_INITIALIZER;
-		count_verify[nr] = *area_count(area_src, nr) = 1;
-		/*
-		 * In the transition between 255 to 256, powerpc will
-		 * read out of order in my_bcmp and see both bytes as
-		 * zero, so leave a placeholder below always non-zero
-		 * after the count, to avoid my_bcmp to trigger false
-		 * positives.
-		 */
-		*(area_count(area_src, nr) + 1) = 1;
-	}
-
-	pipefd = malloc(sizeof(int) * nr_cpus * 2);
-	if (!pipefd) {
-		perror("pipefd");
+	if (uffd_test_ctx_init(0))
 		return 1;
-	}
-	for (cpu = 0; cpu < nr_cpus; cpu++) {
-		if (pipe2(&pipefd[cpu*2], O_CLOEXEC | O_NONBLOCK)) {
-			perror("pipe");
-			return 1;
-		}
-	}
 
 	if (posix_memalign(&area, page_size, page_size))
 		err("out of memory");
@@ -1360,7 +1394,6 @@ static int userfaultfd_stress(void)
 		uffd_stats_report(uffd_stats, nr_cpus);
 	}
 
-	close(uffd);
 	return userfaultfd_zeropage_test() || userfaultfd_sig_test() ||
 		userfaultfd_events_test() || userfaultfd_minor_test();
 }
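The net effect is a conventional setup/teardown pair: every test case
now begins by discarding whatever the previous case left behind and
rebuilding a fresh context. Schematically (a paraphrase of the pattern
in the diff above, not additional code from the patch):

/* Shape of each test case after this change (illustrative): */
static int some_uffd_test(void)
{
	uint64_t features = 0;	/* per-test feature mask */

	/* Tear down prior global state: close uffd and pipefds,
	 * free count_verify, munmap all four areas. */
	if (uffd_test_ctx_clear())
		return 1;
	/* Rebuild: allocate areas, reopen uffd, negotiate features. */
	if (uffd_test_ctx_init(features))
		return 1;

	/* ... test body runs against pristine area_src/area_dst ... */
	return 0;
}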
From patchwork Tue Apr 13 05:17:20 2021

Date: Mon, 12 Apr 2021 22:17:20 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-9-axelrasmussen@google.com>
Subject: [PATCH v2 8/9] userfaultfd/selftests: exercise minor fault handling shmem support
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
 Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
 Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
 "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

Enable test_uffdio_minor for test_type == TEST_SHMEM, and modify the
test slightly to pass in / check for the right feature flags.

Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 29 ++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 0ff01f437a39..0830f155e6c2 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -484,6 +484,7 @@ static void wp_range(int ufd, __u64 start, __u64 len, bool wp)
 static void continue_range(int ufd, __u64 start, __u64 len)
 {
 	struct uffdio_continue req;
+	int ret;
 
 	req.range.start = start;
 	req.range.len = len;
@@ -492,6 +493,17 @@ static void continue_range(int ufd, __u64 start, __u64 len)
 	if (ioctl(ufd, UFFDIO_CONTINUE, &req))
 		err("UFFDIO_CONTINUE failed for address 0x%" PRIx64,
 		    (uint64_t)start);
+
+	/*
+	 * Error handling within the kernel for continue is subtly different
+	 * from copy or zeropage, so it may be a source of bugs.
+	 * Trigger an error (-EEXIST) on purpose, to verify doing so
+	 * doesn't cause a BUG.
+	 */
+	req.mapped = 0;
+	ret = ioctl(ufd, UFFDIO_CONTINUE, &req);
+	if (ret >= 0 || req.mapped != -EEXIST)
+		err("failed to exercise UFFDIO_CONTINUE error handling, ret=%d, mapped=%" PRId64,
+		    ret, (int64_t) req.mapped);
 }
 
 static void *locking_thread(void *arg)
@@ -1196,7 +1208,7 @@ static int userfaultfd_minor_test(void)
 	void *expected_page;
 	char c;
 	struct uffd_stats stats = { 0 };
-	uint64_t features = UFFD_FEATURE_MINOR_HUGETLBFS;
+	uint64_t req_features, features_out;
 
 	if (!test_uffdio_minor)
 		return 0;
@@ -1204,10 +1216,18 @@ static int userfaultfd_minor_test(void)
 	printf("testing minor faults: ");
 	fflush(stdout);
 
-	if (uffd_test_ctx_clear() || uffd_test_ctx_init_ext(&features))
+	if (test_type == TEST_HUGETLB)
+		req_features = UFFD_FEATURE_MINOR_HUGETLBFS;
+	else if (test_type == TEST_SHMEM)
+		req_features = UFFD_FEATURE_MINOR_SHMEM;
+	else
+		return 1;
+
+	features_out = req_features;
+	if (uffd_test_ctx_clear() || uffd_test_ctx_init_ext(&features_out))
 		return 1;
-	/* If kernel reports the feature isn't supported, skip the test. */
-	if (!(features & UFFD_FEATURE_MINOR_HUGETLBFS)) {
+
+	/* If kernel reports required features aren't supported, skip test. */
+	if ((features_out & req_features) != req_features) {
 		printf("skipping test due to lack of feature support\n");
 		fflush(stdout);
 		return 0;
@@ -1442,6 +1462,7 @@ static void set_test_type(const char *type)
 		map_shared = true;
 		test_type = TEST_SHMEM;
 		uffd_test_ops = &shmem_uffd_test_ops;
+		test_uffdio_minor = true;
 	} else {
 		err("Unknown test type: %s", type);
 	}
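The req_features/features_out split mirrors how UFFDIO_API negotiation
works in general: userspace proposes a feature set, and the kernel
reports back what it actually supports. A minimal sketch of that
handshake, independent of the selftest (assumes 'uffd' came from the
userfaultfd syscall and has not been initialized yet; error handling
trimmed):

#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Return true iff the kernel supports every feature bit in 'wanted'. */
static bool uffd_supports(int uffd, uint64_t wanted)
{
	struct uffdio_api api;

	memset(&api, 0, sizeof(api));
	api.api = UFFD_API;
	api.features = wanted;		/* the request */

	/* Fails (EINVAL) on kernels that don't know the requested bits. */
	if (ioctl(uffd, UFFDIO_API, &api))
		return false;

	/* On success the kernel wrote back the supported feature set. */
	return (api.features & wanted) == wanted;
}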
From patchwork Tue Apr 13 05:17:21 2021

Date: Mon, 12 Apr 2021 22:17:21 -0700
In-Reply-To: <20210413051721.2896915-1-axelrasmussen@google.com>
Message-Id: <20210413051721.2896915-10-axelrasmussen@google.com>
Subject: [PATCH v2 9/9] userfaultfd/shmem: modify shmem_mcopy_atomic_pte to use install_ptes
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
 Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
 Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
 "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

In a previous commit, we added the mcopy_atomic_install_ptes() helper.
This helper does the job of setting up PTEs for an existing page, to
map it into a given VMA. It deals with both the anon and shmem cases,
as well as the shared and private cases. In other words,
shmem_mcopy_atomic_pte() duplicates a case the helper already handles.

So, expose it, and let shmem_mcopy_atomic_pte() use it directly, to
reduce code duplication. This requires that we refactor
shmem_mcopy_atomic_pte() a bit: instead of doing accounting
(shmem_recalc_inode() et al) part-way through the PTE setup, do it
beforehand. This frees up mcopy_atomic_install_ptes() from having to
care about this accounting, but it does mean we need to clean it up if
we get a failure afterwards (shmem_uncharge()). We can *almost* use
shmem_charge() to do this, reducing code duplication. But it does
`inode->i_mapping->nrpages++`, which would double-count, since
shmem_add_to_page_cache() also does this.
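To make the new ordering concrete, the refactored control flow reads
roughly as follows (a paraphrase of the diff below, not a drop-in):

/*
 * shmem_mcopy_atomic_pte() after the refactor (paraphrased):
 *
 *   shmem_alloc_page() + copy in user data
 *   shmem_add_to_page_cache()          <- does mapping->nrpages++
 *   open-coded accounting block:
 *       info->alloced++; i_blocks += BLOCKS_PER_PAGE;
 *       shmem_recalc_inode();
 *       (shmem_charge() would do all of this, but it also does
 *        nrpages++, double-counting the page added just above)
 *   mcopy_atomic_install_ptes()        <- shared PTE installer
 *
 *   on install failure:
 *       delete_from_page_cache(); shmem_uncharge();
 */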
Signed-off-by: Axel Rasmussen
---
 include/linux/userfaultfd_k.h |  5 ++++
 mm/shmem.c                    | 52 +++++++----------------------------
 mm/userfaultfd.c              | 25 ++++++++---------
 3 files changed, 27 insertions(+), 55 deletions(-)

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 794d1538b8ba..3e20bfa9ef80 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -53,6 +53,11 @@ enum mcopy_atomic_mode {
 	MCOPY_ATOMIC_CONTINUE,
 };
 
+extern int mcopy_atomic_install_ptes(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+				     struct vm_area_struct *dst_vma,
+				     unsigned long dst_addr, struct page *page,
+				     bool newly_allocated, bool wp_copy);
+
 extern ssize_t mcopy_atomic(struct mm_struct *dst_mm, unsigned long dst_start,
 			    unsigned long src_start, unsigned long len,
 			    bool *mmap_changing, __u64 mode);
diff --git a/mm/shmem.c b/mm/shmem.c
index 3f48cb5e8404..9b12298405a4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2376,10 +2376,8 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	struct address_space *mapping = inode->i_mapping;
 	gfp_t gfp = mapping_gfp_mask(mapping);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	spinlock_t *ptl;
 	void *page_kaddr;
 	struct page *page;
-	pte_t _dst_pte, *dst_pte;
 	int ret;
 	pgoff_t max_off;
 
@@ -2389,8 +2387,10 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 
 	if (!*pagep) {
 		page = shmem_alloc_page(gfp, info, pgoff);
-		if (!page)
-			goto out_unacct_blocks;
+		if (!page) {
+			shmem_inode_unacct_blocks(inode, 1);
+			goto out;
+		}
 
 		if (!zeropage) {	/* COPY */
 			page_kaddr = kmap_atomic(page);
@@ -2430,59 +2430,27 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	if (ret)
 		goto out_release;
 
-	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
-	if (dst_vma->vm_flags & VM_WRITE)
-		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
-	else {
-		/*
-		 * We don't set the pte dirty if the vma has no
-		 * VM_WRITE permission, so mark the page dirty or it
-		 * could be freed from under us. We could do it
-		 * unconditionally before unlock_page(), but doing it
-		 * only if VM_WRITE is not set is faster.
-		 */
-		set_page_dirty(page);
-	}
-
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
-
-	ret = -EFAULT;
-	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(pgoff >= max_off))
-		goto out_release_unlock;
-
-	ret = -EEXIST;
-	if (!pte_none(*dst_pte))
-		goto out_release_unlock;
-
-	lru_cache_add(page);
-
 	spin_lock_irq(&info->lock);
 	info->alloced++;
 	inode->i_blocks += BLOCKS_PER_PAGE;
 	shmem_recalc_inode(inode);
 	spin_unlock_irq(&info->lock);
 
-	inc_mm_counter(dst_mm, mm_counter_file(page));
-	page_add_file_rmap(page, false);
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+	ret = mcopy_atomic_install_ptes(dst_mm, dst_pmd, dst_vma, dst_addr,
+					page, true, false);
+	if (ret)
+		goto out_release_uncharge;
 
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-	pte_unmap_unlock(dst_pte, ptl);
 	unlock_page(page);
 	ret = 0;
 out:
 	return ret;
-out_release_unlock:
-	pte_unmap_unlock(dst_pte, ptl);
-	ClearPageDirty(page);
+out_release_uncharge:
 	delete_from_page_cache(page);
+	shmem_uncharge(inode, 1);
 out_release:
 	unlock_page(page);
 	put_page(page);
-out_unacct_blocks:
-	shmem_inode_unacct_blocks(inode, 1);
 	goto out;
 }
 #endif /* CONFIG_USERFAULTFD */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 8df0438f5d6a..3f73ba0b99f0 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -51,18 +51,13 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 /*
  * Install PTEs, to map dst_addr (within dst_vma) to page.
  *
- * This function handles MCOPY_ATOMIC_CONTINUE (which is always file-backed),
- * whether or not dst_vma is VM_SHARED. It also handles the more general
- * MCOPY_ATOMIC_NORMAL case, when dst_vma is *not* VM_SHARED (it may be file
- * backed, or not).
- *
- * Note that MCOPY_ATOMIC_NORMAL for a VM_SHARED dst_vma is handled by
- * shmem_mcopy_atomic_pte instead.
+ * This function handles both MCOPY_ATOMIC_NORMAL and _CONTINUE for both shmem
+ * and anon, and for both shared and private VMAs.
  */
-static int mcopy_atomic_install_ptes(struct mm_struct *dst_mm, pmd_t *dst_pmd,
-				     struct vm_area_struct *dst_vma,
-				     unsigned long dst_addr, struct page *page,
-				     bool newly_allocated, bool wp_copy)
+int mcopy_atomic_install_ptes(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+			      struct vm_area_struct *dst_vma,
+			      unsigned long dst_addr, struct page *page,
+			      bool newly_allocated, bool wp_copy)
 {
 	int ret;
 	pte_t _dst_pte, *dst_pte;
@@ -116,8 +111,12 @@ static int mcopy_atomic_install_ptes(struct mm_struct *dst_mm, pmd_t *dst_pmd,
 	else
 		page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
 
-	if (newly_allocated)
-		lru_cache_add_inactive_or_unevictable(page, dst_vma);
+	if (newly_allocated) {
+		if (vma_is_shmem(dst_vma) && vm_shared)
+			lru_cache_add(page);
+		else
+			lru_cache_add_inactive_or_unevictable(page, dst_vma);
+	}
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);