From patchwork Tue Apr 20 22:07:55 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12215111
Date: Tue, 20 Apr 2021 15:07:55 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-2-axelrasmussen@google.com>
Subject: [PATCH v4 01/10] userfaultfd/hugetlbfs: avoid including userfaultfd_k.h in hugetlb.h
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz,
    Mike Rapoport, Peter Xu, Shaohua Li, Shuah Khan,
    Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

Minimizing header file inclusion is desirable. In this case, we can do
so just by forward-declaring the enumeration our signature relies upon.

Reviewed-by: Peter Xu
Signed-off-by: Axel Rasmussen
Acked-by: Hugh Dickins
---
 include/linux/hugetlb.h | 4 +++-
 mm/hugetlb.c            | 1 +
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 09f1fd12a6fa..ca8868cdac16 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -11,11 +11,11 @@
 #include
 #include
 #include
-#include <linux/userfaultfd_k.h>

 struct ctl_table;
 struct user_struct;
 struct mmu_gather;
+enum mcopy_atomic_mode;

 #ifndef is_hugepd
 typedef struct { unsigned long pd; } hugepd_t;
@@ -135,6 +135,7 @@ void hugetlb_show_meminfo(void);
 unsigned long hugetlb_total_pages(void);
 vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags);
+
 #ifdef CONFIG_USERFAULTFD
 int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
 			     struct vm_area_struct *dst_vma,
@@ -143,6 +144,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
 			     enum mcopy_atomic_mode mode,
 			     struct page **pagep);
 #endif /* CONFIG_USERFAULTFD */
+
 bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 			   struct vm_area_struct *vma,
 			   vm_flags_t vm_flags);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 54d81d5947ed..b1652e747318 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include <linux/userfaultfd_k.h>
 #include "internal.h"

 int hugetlb_max_hstate __read_mostly;
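The pattern above, reduced to a standalone sketch (hypothetical names
such as "copy_mode"; it relies on the same GNU C incomplete-enum
extension the kernel build uses): a prototype only needs the enum's
*name*, so the header declaring it need not include the header that
defines the enumerators.

	#include <stdio.h>
	#include <string.h>

	/*
	 * "consumer.h": needs only the enum's name for a prototype, so it
	 * forward-declares it (a GNU C extension, as the kernel does here)
	 * rather than including the header that defines the enumerators.
	 */
	enum copy_mode;				/* incomplete type for now */
	void do_copy(char *dst, const char *src, size_t len,
		     enum copy_mode mode);

	/* "modes.h": the full definition, included only where needed. */
	enum copy_mode { COPY_NORMAL, COPY_ZEROPAGE };

	void do_copy(char *dst, const char *src, size_t len,
		     enum copy_mode mode)
	{
		if (mode == COPY_ZEROPAGE)
			memset(dst, 0, len);
		else
			memcpy(dst, src, len);
	}

	int main(void)
	{
		char buf[8];

		do_copy(buf, "hello", 6, COPY_NORMAL);
		printf("%s\n", buf);
		return 0;
	}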
From patchwork Tue Apr 20 22:07:56 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12215113
Date: Tue, 20 Apr 2021 15:07:56 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-3-axelrasmussen@google.com>
Subject: [PATCH v4 02/10] userfaultfd/shmem: combine shmem_{mcopy_atomic,mfill_zeropage}_pte
From: Axel Rasmussen

Previously, we did a dance where we had one calling path in
userfaultfd.c (mfill_atomic_pte), but then we split it into two in
shmem_fs.h (shmem_{mcopy_atomic,mfill_zeropage}_pte), and then rejoined
into a single shared function in shmem.c (shmem_mfill_atomic_pte).

This is all a bit overly complex. Just call the single combined shmem
function directly, allowing us to clean up various branches,
boilerplate, etc.

While we're touching this function, two other small cleanup changes:

- offset is equivalent to pgoff, so we can get rid of offset entirely.
- Split two VM_BUG_ON cases into two statements.
  This means the line number reported when the BUG is hit specifies
  exactly which condition was true.

Reviewed-by: Peter Xu
Acked-by: Hugh Dickins
Signed-off-by: Axel Rasmussen
---
 include/linux/shmem_fs.h | 17 ++++++-------
 mm/shmem.c               | 52 +++++++++++++---------------------------
 mm/userfaultfd.c         | 10 +++-----
 3 files changed, 26 insertions(+), 53 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d82b6f396588..47c3409d02ac 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -122,21 +122,18 @@ static inline bool shmem_file(struct file *file)
 extern bool shmem_charge(struct inode *inode, long pages);
 extern void shmem_uncharge(struct inode *inode, long pages);

+#ifdef CONFIG_USERFAULTFD
 #ifdef CONFIG_SHMEM
 extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
				  struct vm_area_struct *dst_vma,
				  unsigned long dst_addr,
				  unsigned long src_addr,
+				  bool zeropage,
				  struct page **pagep);
-extern int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
-				    pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_addr);
-#else
-#define shmem_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma, dst_addr, \
-			       src_addr, pagep) ({ BUG(); 0; })
-#define shmem_mfill_zeropage_pte(dst_mm, dst_pmd, dst_vma, \
-				 dst_addr) ({ BUG(); 0; })
-#endif
+#else /* !CONFIG_SHMEM */
+#define shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr, \
+			       src_addr, zeropage, pagep) ({ BUG(); 0; })
+#endif /* CONFIG_SHMEM */
+#endif /* CONFIG_USERFAULTFD */

 #endif
diff --git a/mm/shmem.c b/mm/shmem.c
index 26c76b13ad23..b72c55aa07fc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2354,13 +2354,14 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
 	return inode;
 }

-static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
-				  pmd_t *dst_pmd,
-				  struct vm_area_struct *dst_vma,
-				  unsigned long dst_addr,
-				  unsigned long src_addr,
-				  bool zeropage,
-				  struct page **pagep)
+#ifdef CONFIG_USERFAULTFD
+int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
+			   pmd_t *dst_pmd,
+			   struct vm_area_struct *dst_vma,
+			   unsigned long dst_addr,
+			   unsigned long src_addr,
+			   bool zeropage,
+			   struct page **pagep)
 {
 	struct inode *inode = file_inode(dst_vma->vm_file);
 	struct shmem_inode_info *info = SHMEM_I(inode);
@@ -2372,7 +2373,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	struct page *page;
 	pte_t _dst_pte, *dst_pte;
 	int ret;
-	pgoff_t offset, max_off;
+	pgoff_t max_off;

 	ret = -ENOMEM;
 	if (!shmem_inode_acct_block(inode, 1))
@@ -2383,7 +2384,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 		if (!page)
 			goto out_unacct_blocks;

-		if (!zeropage) {	/* mcopy_atomic */
+		if (!zeropage) {	/* COPY */
 			page_kaddr = kmap_atomic(page);
 			ret = copy_from_user(page_kaddr,
 					     (const void __user *)src_addr,
@@ -2397,7 +2398,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 				/* don't free the page */
 				return -ENOENT;
 			}
-		} else {		/* mfill_zeropage_atomic */
+		} else {		/* ZEROPAGE */
 			clear_highpage(page);
 		}
 	} else {
@@ -2405,15 +2406,15 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 		*pagep = NULL;
 	}

-	VM_BUG_ON(PageLocked(page) || PageSwapBacked(page));
+	VM_BUG_ON(PageLocked(page));
+	VM_BUG_ON(PageSwapBacked(page));
 	__SetPageLocked(page);
 	__SetPageSwapBacked(page);
 	__SetPageUptodate(page);

 	ret = -EFAULT;
-	offset = linear_page_index(dst_vma, dst_addr);
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(offset >= max_off))
+	if (unlikely(pgoff >= max_off))
 		goto out_release;
 	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
@@ -2439,7 +2440,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,

 	ret = -EFAULT;
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(offset >= max_off))
+	if (unlikely(pgoff >= max_off))
 		goto out_release_unlock;

 	ret = -EEXIST;
@@ -2476,28 +2477,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	shmem_inode_unacct_blocks(inode, 1);
 	goto out;
 }
-
-int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
-			   pmd_t *dst_pmd,
-			   struct vm_area_struct *dst_vma,
-			   unsigned long dst_addr,
-			   unsigned long src_addr,
-			   struct page **pagep)
-{
-	return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
-				      dst_addr, src_addr, false, pagep);
-}
-
-int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
-			     pmd_t *dst_pmd,
-			     struct vm_area_struct *dst_vma,
-			     unsigned long dst_addr)
-{
-	struct page *page = NULL;
-
-	return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
-				      dst_addr, 0, true, &page);
-}
+#endif /* CONFIG_USERFAULTFD */

 #ifdef CONFIG_TMPFS
 static const struct inode_operations shmem_symlink_inode_operations;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e14b3820c6a8..23fa2583bbd1 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -440,13 +440,9 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
						 dst_vma, dst_addr);
 	} else {
 		VM_WARN_ON_ONCE(wp_copy);
-		if (!zeropage)
-			err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd,
-						     dst_vma, dst_addr,
-						     src_addr, page);
-		else
-			err = shmem_mfill_zeropage_pte(dst_mm, dst_pmd,
-						       dst_vma, dst_addr);
+		err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
+					     dst_addr, src_addr, zeropage,
+					     page);
 	}

 	return err;
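The consolidation pattern, as a minimal userspace sketch (hypothetical
names, not the kernel code itself): rather than a COPY wrapper and a
ZEROPAGE wrapper around one static helper, callers pass the flag
directly.

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	/*
	 * Before: fill_copy() and fill_zero() both funneled into one
	 * static helper. After: one function, and callers pass the flag.
	 */
	static int mfill(char *dst, const char *src, size_t len,
			 bool zeropage)
	{
		if (zeropage)
			memset(dst, 0, len);	/* ZEROPAGE */
		else
			memcpy(dst, src, len);	/* COPY */
		return 0;
	}

	int main(void)
	{
		char a[6], b[6];

		mfill(a, "abcde", sizeof(a), false);	/* was: fill_copy() */
		mfill(b, NULL, sizeof(b), true);	/* was: fill_zero() */
		printf("%s %d\n", a, b[0]);
		return 0;
	}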
From patchwork Tue Apr 20 22:07:57 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12215115
Date: Tue, 20 Apr 2021 15:07:57 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-4-axelrasmussen@google.com>
Subject: [PATCH v4 03/10] userfaultfd/shmem: support UFFDIO_CONTINUE for shmem
From: Axel Rasmussen

With this change, userspace can resolve a minor fault within a
shmem-backed area with a UFFDIO_CONTINUE ioctl. The semantics for this
match those for hugetlbfs - we look up the existing page in the page
cache, and install a PTE for it.

This commit introduces a new helper: mcopy_atomic_install_pte.

Why handle UFFDIO_CONTINUE for shmem in mm/userfaultfd.c, instead of in
shmem.c? The existing userfault implementation only relies on shmem.c
for VM_SHARED VMAs. However, minor fault handling / CONTINUE work just
fine for !VM_SHARED VMAs as well. We'd prefer to handle CONTINUE for
shmem in one place, regardless of shared/private (to reduce code
duplication).

Why add a new mcopy_atomic_install_pte helper? A problem we have with
CONTINUE is that shmem_mcopy_atomic_pte() and mcopy_atomic_pte() are
*close* to what we want, but not exactly. We do want to set up the PTEs
in a CONTINUE operation, but we don't want to e.g. allocate a new page,
charge it (e.g. to the shmem inode), manipulate various flags, etc.
Also we have the problem stated above: shmem_mcopy_atomic_pte() and
mcopy_atomic_pte() each handle only one half of the problem
(shared / private) that CONTINUE cares about.
So, introduce mcontinue_atomic_pte(), to handle all of the shmem
CONTINUE cases. Introduce the helper so it doesn't duplicate code with
mcopy_atomic_pte().

In a future commit, shmem_mcopy_atomic_pte() will also be modified to
use this new helper. However, since this is a bigger refactor, it seems
most clear to do it as a separate change.

Signed-off-by: Axel Rasmussen
Acked-by: Hugh Dickins
---
 mm/userfaultfd.c | 172 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 127 insertions(+), 45 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 23fa2583bbd1..51d8c0127161 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -48,6 +48,83 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 	return dst_vma;
 }

+/*
+ * Install PTEs, to map dst_addr (within dst_vma) to page.
+ *
+ * This function handles MCOPY_ATOMIC_CONTINUE (which is always file-backed),
+ * whether or not dst_vma is VM_SHARED. It also handles the more general
+ * MCOPY_ATOMIC_NORMAL case, when dst_vma is *not* VM_SHARED (it may be file
+ * backed, or not).
+ *
+ * Note that MCOPY_ATOMIC_NORMAL for a VM_SHARED dst_vma is handled by
+ * shmem_mcopy_atomic_pte instead.
+ */
+static int mcopy_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+				    struct vm_area_struct *dst_vma,
+				    unsigned long dst_addr, struct page *page,
+				    bool newly_allocated, bool wp_copy)
+{
+	int ret;
+	pte_t _dst_pte, *dst_pte;
+	bool writable = dst_vma->vm_flags & VM_WRITE;
+	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
+	bool page_in_cache = page->mapping;
+	spinlock_t *ptl;
+	struct inode *inode;
+	pgoff_t offset, max_off;
+
+	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
+	if (page_in_cache && !vm_shared)
+		writable = false;
+	if (writable || !page_in_cache)
+		_dst_pte = pte_mkdirty(_dst_pte);
+	if (writable) {
+		if (wp_copy)
+			_dst_pte = pte_mkuffd_wp(_dst_pte);
+		else
+			_dst_pte = pte_mkwrite(_dst_pte);
+	}
+
+	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+
+	if (vma_is_shmem(dst_vma)) {
+		/* serialize against truncate with the page table lock */
+		inode = dst_vma->vm_file->f_inode;
+		offset = linear_page_index(dst_vma, dst_addr);
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		ret = -EFAULT;
+		if (unlikely(offset >= max_off))
+			goto out_unlock;
+	}
+
+	ret = -EEXIST;
+	if (!pte_none(*dst_pte))
+		goto out_unlock;
+
+	if (page_in_cache)
+		page_add_file_rmap(page, false);
+	else
+		page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
+
+	/*
+	 * Must happen after rmap, as mm_counter() checks mapping (via
+	 * PageAnon()), which is set by __page_set_anon_rmap().
+	 */
+	inc_mm_counter(dst_mm, mm_counter(page));
+
+	if (newly_allocated)
+		lru_cache_add_inactive_or_unevictable(page, dst_vma);
+
+	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+
+	/* No need to invalidate - it was non-present before */
+	update_mmu_cache(dst_vma, dst_addr, dst_pte);
+	ret = 0;
+out_unlock:
+	pte_unmap_unlock(dst_pte, ptl);
+	return ret;
+}
+
 static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 			    pmd_t *dst_pmd,
 			    struct vm_area_struct *dst_vma,
@@ -56,13 +133,9 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 			    struct page **pagep,
 			    bool wp_copy)
 {
-	pte_t _dst_pte, *dst_pte;
-	spinlock_t *ptl;
 	void *page_kaddr;
 	int ret;
 	struct page *page;
-	pgoff_t offset, max_off;
-	struct inode *inode;

 	if (!*pagep) {
 		ret = -ENOMEM;
@@ -99,43 +172,12 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
 		goto out_release;

-	_dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot));
-	if (dst_vma->vm_flags & VM_WRITE) {
-		if (wp_copy)
-			_dst_pte = pte_mkuffd_wp(_dst_pte);
-		else
-			_dst_pte = pte_mkwrite(_dst_pte);
-	}
-
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
-	if (dst_vma->vm_file) {
-		/* the shmem MAP_PRIVATE case requires checking the i_size */
-		inode = dst_vma->vm_file->f_inode;
-		offset = linear_page_index(dst_vma, dst_addr);
-		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-		ret = -EFAULT;
-		if (unlikely(offset >= max_off))
-			goto out_release_uncharge_unlock;
-	}
-	ret = -EEXIST;
-	if (!pte_none(*dst_pte))
-		goto out_release_uncharge_unlock;
-
-	inc_mm_counter(dst_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
-	lru_cache_add_inactive_or_unevictable(page, dst_vma);
-
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-
-	pte_unmap_unlock(dst_pte, ptl);
-	ret = 0;
+	ret = mcopy_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, true, wp_copy);
+	if (ret)
+		goto out_release;
 out:
 	return ret;
-out_release_uncharge_unlock:
-	pte_unmap_unlock(dst_pte, ptl);
 out_release:
 	put_page(page);
 	goto out;
@@ -176,6 +218,41 @@ static int mfill_zeropage_pte(struct mm_struct *dst_mm,
 	return ret;
 }

+/* Handles UFFDIO_CONTINUE for all shmem VMAs (shared or private). */
+static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
+				pmd_t *dst_pmd,
+				struct vm_area_struct *dst_vma,
+				unsigned long dst_addr,
+				bool wp_copy)
+{
+	struct inode *inode = file_inode(dst_vma->vm_file);
+	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
+	struct page *page;
+	int ret;
+
+	ret = shmem_getpage(inode, pgoff, &page, SGP_READ);
+	if (ret)
+		goto out;
+	if (!page) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = mcopy_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, false, wp_copy);
+	if (ret)
+		goto out_release;
+
+	unlock_page(page);
+	ret = 0;
+out:
+	return ret;
+out_release:
+	unlock_page(page);
+	put_page(page);
+	goto out;
+}
+
 static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 {
 	pgd_t *pgd;
@@ -415,11 +492,16 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
						unsigned long dst_addr,
						unsigned long src_addr,
						struct page **page,
-						bool zeropage,
+						enum mcopy_atomic_mode mode,
						bool wp_copy)
 {
 	ssize_t err;

+	if (mode == MCOPY_ATOMIC_CONTINUE) {
+		return mcontinue_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+					    wp_copy);
+	}
+
 	/*
 	 * The normal page fault path for a shmem will invoke the
 	 * fault, fill the hole in the file and COW it right away. The
@@ -431,7 +513,7 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 	 * and not in the radix tree.
 	 */
 	if (!(dst_vma->vm_flags & VM_SHARED)) {
-		if (!zeropage)
+		if (mode == MCOPY_ATOMIC_NORMAL)
 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
					       dst_addr, src_addr, page,
					       wp_copy);
@@ -441,7 +523,8 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 	} else {
 		VM_WARN_ON_ONCE(wp_copy);
 		err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
-					     dst_addr, src_addr, zeropage,
+					     dst_addr, src_addr,
+					     mode != MCOPY_ATOMIC_NORMAL,
					     page);
 	}

 	return err;
@@ -463,7 +546,6 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	long copied;
 	struct page *page;
 	bool wp_copy;
-	bool zeropage = (mcopy_mode == MCOPY_ATOMIC_ZEROPAGE);

 	/*
 	 * Sanitize the command parameters:
@@ -526,7 +608,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;

-	if (mcopy_mode == MCOPY_ATOMIC_CONTINUE)
+	if (!vma_is_shmem(dst_vma) && mcopy_mode == MCOPY_ATOMIC_CONTINUE)
 		goto out_unlock;

 	/*
@@ -574,7 +656,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	BUG_ON(pmd_trans_huge(*dst_pmd));

 	err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
-			       src_addr, &page, zeropage, wp_copy);
+			       src_addr, &page, mcopy_mode, wp_copy);
 	cond_resched();

 	if (unlikely(err == -ENOENT)) {
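For context on how userspace consumes this, a hedged sketch of the
resolver side (assumes UAPI headers from a kernel carrying this series;
error handling trimmed): the monitor thread reads a minor-fault message
and resolves it with UFFDIO_CONTINUE, since the page contents already
exist in the page cache.

	#include <linux/userfaultfd.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	/*
	 * Read one event from the userfaultfd and, if it is a minor
	 * fault, resolve it: the page is already in the page cache, so
	 * we only ask the kernel to install PTEs for it.
	 */
	static int handle_one_minor_fault(int uffd, unsigned long page_size)
	{
		struct uffd_msg msg;
		struct uffdio_continue cont;

		if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
			return -1;
		if (msg.event != UFFD_EVENT_PAGEFAULT ||
		    !(msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_MINOR))
			return -1;

		cont.range.start = msg.arg.pagefault.address &
				   ~(page_size - 1);
		cont.range.len = page_size;
		cont.mode = 0;
		return ioctl(uffd, UFFDIO_CONTINUE, &cont);
	}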
From patchwork Tue Apr 20 22:07:58 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12215127
Date: Tue, 20 Apr 2021 15:07:58 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-5-axelrasmussen@google.com>
Subject: [PATCH v4 04/10] userfaultfd/shmem: support minor fault registration for shmem
From: Axel Rasmussen

This patch allows shmem-backed VMAs to be registered for minor faults.
Minor faults are appropriately relayed to userspace in the fault path,
for VMAs with the relevant flag.
This commit doesn't hook up the UFFDIO_CONTINUE ioctl for shmem-backed
minor faults, though, so userspace doesn't yet have a way to resolve
such faults.

Acked-by: Peter Xu
Signed-off-by: Axel Rasmussen
Acked-by: Hugh Dickins
---
 fs/userfaultfd.c                 |  6 +++---
 include/uapi/linux/userfaultfd.h |  7 ++++++-
 mm/memory.c                      |  8 +++++---
 mm/shmem.c                       | 12 +++++++++++-
 4 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 14f92285d04f..9f3b8684cf3c 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1267,8 +1267,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 	}

 	if (vm_flags & VM_UFFD_MINOR) {
-		/* FIXME: Add minor fault interception for shmem. */
-		if (!is_vm_hugetlb_page(vma))
+		if (!(is_vm_hugetlb_page(vma) || vma_is_shmem(vma)))
 			return false;
 	}

@@ -1941,7 +1940,8 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
 	/* report all available features and ioctls to userland */
 	uffdio_api.features = UFFD_API_FEATURES;
 #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
-	uffdio_api.features &= ~UFFD_FEATURE_MINOR_HUGETLBFS;
+	uffdio_api.features &=
+		~(UFFD_FEATURE_MINOR_HUGETLBFS | UFFD_FEATURE_MINOR_SHMEM);
 #endif
 	uffdio_api.ioctls = UFFD_API_IOCTLS;
 	ret = -EFAULT;
diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
index bafbeb1a2624..159a74e9564f 100644
--- a/include/uapi/linux/userfaultfd.h
+++ b/include/uapi/linux/userfaultfd.h
@@ -31,7 +31,8 @@
			   UFFD_FEATURE_MISSING_SHMEM |		\
			   UFFD_FEATURE_SIGBUS |		\
			   UFFD_FEATURE_THREAD_ID |		\
-			   UFFD_FEATURE_MINOR_HUGETLBFS)
+			   UFFD_FEATURE_MINOR_HUGETLBFS |	\
+			   UFFD_FEATURE_MINOR_SHMEM)
 #define UFFD_API_IOCTLS				\
	((__u64)1 << _UFFDIO_REGISTER |		\
	 (__u64)1 << _UFFDIO_UNREGISTER |	\
@@ -185,6 +186,9 @@ struct uffdio_api {
	 * UFFD_FEATURE_MINOR_HUGETLBFS indicates that minor faults
	 * can be intercepted (via REGISTER_MODE_MINOR) for
	 * hugetlbfs-backed pages.
+	 *
+	 * UFFD_FEATURE_MINOR_SHMEM indicates the same support as
+	 * UFFD_FEATURE_MINOR_HUGETLBFS, but for shmem-backed pages instead.
	 */
 #define UFFD_FEATURE_PAGEFAULT_FLAG_WP		(1<<0)
 #define UFFD_FEATURE_EVENT_FORK		(1<<1)
@@ -196,6 +200,7 @@ struct uffdio_api {
 #define UFFD_FEATURE_SIGBUS			(1<<7)
 #define UFFD_FEATURE_THREAD_ID			(1<<8)
 #define UFFD_FEATURE_MINOR_HUGETLBFS		(1<<9)
+#define UFFD_FEATURE_MINOR_SHMEM		(1<<10)
	__u64 features;

	__u64 ioctls;
diff --git a/mm/memory.c b/mm/memory.c
index 4e358601c5d6..cc71a445c76c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3972,9 +3972,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
	 * something).
	 */
	if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) {
-		ret = do_fault_around(vmf);
-		if (ret)
-			return ret;
+		if (likely(!userfaultfd_minor(vmf->vma))) {
+			ret = do_fault_around(vmf);
+			if (ret)
+				return ret;
+		}
	}

	ret = __do_fault(vmf);
diff --git a/mm/shmem.c b/mm/shmem.c
index b72c55aa07fc..30c0bb501dc9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1785,7 +1785,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 * vm. If we swap it in we mark it dirty since we also free the swap
 * entry since a page cannot live in both the swap and page cache.
 *
- * vmf and fault_type are only supplied by shmem_fault:
+ * vma, vmf, and fault_type are only supplied by shmem_fault:
 * otherwise they are NULL.
 */
 static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
@@ -1820,6 +1820,16 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,

	page = pagecache_get_page(mapping, index,
					FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
+
+	if (page && vma && userfaultfd_minor(vma)) {
+		if (!xa_is_value(page)) {
+			unlock_page(page);
+			put_page(page);
+		}
+		*fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
+		return 0;
+	}
+
	if (xa_is_value(page)) {
		error = shmem_swapin_page(inode, index, &page,
					  sgp, gfp, vma, fault_type);
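A sketch of the userspace registration flow this enables (assumes a
kernel carrying this series; the helper name is hypothetical): probe
for UFFD_FEATURE_MINOR_SHMEM via UFFDIO_API, then register a shmem
range with UFFDIO_REGISTER_MODE_MINOR.

	#include <linux/userfaultfd.h>
	#include <sys/syscall.h>
	#include <sys/ioctl.h>
	#include <fcntl.h>
	#include <unistd.h>

	/*
	 * Open a userfaultfd and register [addr, addr+len) -- assumed to
	 * be a shmem mapping -- for minor-fault interception. Returns the
	 * uffd, or -1 if the kernel lacks UFFD_FEATURE_MINOR_SHMEM.
	 */
	static int uffd_register_minor(void *addr, unsigned long len)
	{
		struct uffdio_api api = { .api = UFFD_API,
					  .features = UFFD_FEATURE_MINOR_SHMEM };
		struct uffdio_register reg = { 0 };
		int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

		if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
			return -1;
		if (!(api.features & UFFD_FEATURE_MINOR_SHMEM))
			return -1;	/* feature missing in this kernel */

		reg.range.start = (unsigned long)addr;
		reg.range.len = len;
		reg.mode = UFFDIO_REGISTER_MODE_MINOR;
		return ioctl(uffd, UFFDIO_REGISTER, &reg) ? -1 : uffd;
	}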
From patchwork Tue Apr 20 22:07:59 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12215117
Date: Tue, 20 Apr 2021 15:07:59 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-6-axelrasmussen@google.com>
Subject: [PATCH v4 05/10] userfaultfd/selftests: use memfd_create for shmem test type
From: Axel Rasmussen

This is a preparatory commit. In the future, we want to be able to set
up alias mappings for area_src and area_dst in the shmem test, like we
do in the hugetlb_shared test. With a VMA obtained via
mmap(MAP_ANONYMOUS | MAP_SHARED), it isn't clear how to do this.

So, mmap() with an fd, so we can create alias mappings. Use
memfd_create instead of actually passing in a tmpfs path like hugetlb
does, since it's more convenient / simpler to run, and works just as
well.

Future commits will:

1. Set up the alias mappings.
2. Extend our tests to actually take advantage of this, to test new
   userfaultfd behavior being introduced in this series.

Also, a small fix in the area we're changing: when the hugetlb setup
fails in main(), pass in the right argv[] so we actually print out the
hugetlb file path.

Reviewed-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 6339aeaeeff8..fc40831f818f 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -85,6 +85,7 @@ static bool test_uffdio_wp = false;
 static bool test_uffdio_minor = false;

 static bool map_shared;
+static int shm_fd;
 static int huge_fd;
 static char *huge_fd_off0;
 static unsigned long long *count_verify;
@@ -277,8 +278,11 @@ static void shmem_release_pages(char *rel_area)

 static void shmem_allocate_area(void **alloc_area)
 {
+	unsigned long offset =
+		alloc_area == (void **)&area_src ? 0 : nr_pages * page_size;
+
	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
-			   MAP_ANONYMOUS | MAP_SHARED, -1, 0);
+			   MAP_SHARED, shm_fd, offset);
	if (*alloc_area == MAP_FAILED)
		err("mmap of memfd failed");
 }
@@ -1448,6 +1452,16 @@ int main(int argc, char **argv)
			err("Open of %s failed", argv[4]);
		if (ftruncate(huge_fd, 0))
			err("ftruncate %s to size 0 failed", argv[4]);
+	} else if (test_type == TEST_SHMEM) {
+		shm_fd = memfd_create(argv[0], 0);
+		if (shm_fd < 0)
+			err("memfd_create");
+		if (ftruncate(shm_fd, nr_pages * page_size * 2))
+			err("ftruncate");
+		if (fallocate(shm_fd,
+			      FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0,
+			      nr_pages * page_size * 2))
+			err("fallocate");
	}
	printf("nr_pages: %lu, nr_pages_per_cpu: %lu\n",
	       nr_pages, nr_pages_per_cpu);
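The setup the test now performs, extracted as a standalone sketch
(hypothetical helper name; glibc's memfd_create/fallocate wrappers
assumed): one memfd sized for both test areas, src at offset 0 and dst
at offset 'size', with any existing pages punched out so every page
starts as a hole.

	#define _GNU_SOURCE
	#include <sys/mman.h>
	#include <fcntl.h>
	#include <unistd.h>

	static int setup_shm_fd(off_t size)	/* nr_pages * page_size */
	{
		int fd = memfd_create("uffd-shmem-test", 0);

		if (fd < 0)
			return -1;
		/* Room for two areas: src at 0, dst at 'size'. */
		if (ftruncate(fd, size * 2))
			return -1;
		/* Punch everything out so all pages begin as holes. */
		if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			      0, size * 2))
			return -1;
		return fd;
	}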
From patchwork Tue Apr 20 22:08:00 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12215129
Date: Tue, 20 Apr 2021 15:08:00 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-7-axelrasmussen@google.com>
Subject: [PATCH v4 06/10] userfaultfd/selftests: create alias mappings in the shmem test
From: Axel Rasmussen

Previously, we just allocated two shm areas: area_src and area_dst.
With this commit, change this so we also allocate area_src_alias and
area_dst_alias.

area_*_alias and area_* (respectively) point to the same underlying
physical pages, but are different VMAs. In a future commit in this
series, we'll leverage this setup to exercise minor fault handling
support for shmem, just like we do in the hugetlb_shared test.

Reviewed-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index fc40831f818f..1f65c4ab7994 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -278,13 +278,29 @@ static void shmem_release_pages(char *rel_area)

 static void shmem_allocate_area(void **alloc_area)
 {
-	unsigned long offset =
-		alloc_area == (void **)&area_src ? 0 : nr_pages * page_size;
+	void *area_alias = NULL;
+	bool is_src = alloc_area == (void **)&area_src;
+	unsigned long offset = is_src ? 0 : nr_pages * page_size;

	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
			   MAP_SHARED, shm_fd, offset);
	if (*alloc_area == MAP_FAILED)
		err("mmap of memfd failed");
+
+	area_alias = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
+			  MAP_SHARED, shm_fd, offset);
+	if (area_alias == MAP_FAILED)
+		err("mmap of memfd alias failed");
+
+	if (is_src)
+		area_src_alias = area_alias;
+	else
+		area_dst_alias = area_alias;
+}
+
+static void shmem_alias_mapping(__u64 *start, size_t len, unsigned long offset)
+{
+	*start = (unsigned long)area_dst_alias + offset;
 }

 struct uffd_test_ops {
@@ -314,7 +330,7 @@ static struct uffd_test_ops shmem_uffd_test_ops = {
	.expected_ioctls = SHMEM_EXPECTED_IOCTLS,
	.allocate_area	= shmem_allocate_area,
	.release_pages	= shmem_release_pages,
-	.alias_mapping = noop_alias_mapping,
+	.alias_mapping = shmem_alias_mapping,
 };

 static struct uffd_test_ops hugetlb_uffd_test_ops = {
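A minimal standalone demonstration of the alias-mapping idea (not the
selftest code itself): two mmap()s of the same memfd offset yield two
VMAs over the same physical pages, so a store through one mapping is
immediately visible through the other.

	#define _GNU_SOURCE
	#include <sys/mman.h>
	#include <unistd.h>
	#include <assert.h>
	#include <string.h>

	int main(void)
	{
		size_t len = (size_t)getpagesize();
		int fd = memfd_create("alias-demo", 0);
		char *area, *alias;

		assert(fd >= 0 && !ftruncate(fd, len));
		area  = mmap(NULL, len, PROT_READ | PROT_WRITE,
			     MAP_SHARED, fd, 0);
		alias = mmap(NULL, len, PROT_READ | PROT_WRITE,
			     MAP_SHARED, fd, 0);
		assert(area != MAP_FAILED && alias != MAP_FAILED);

		strcpy(area, "written via area");
		/* Same physical pages, different VMAs. */
		assert(!strcmp(alias, "written via area"));
		return 0;
	}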
From patchwork Tue Apr 20 22:08:01 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12215119
Date: Tue, 20 Apr 2021 15:08:01 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-8-axelrasmussen@google.com>
Subject: [PATCH v4 07/10] userfaultfd/selftests: reinitialize test context in each test
From: Axel Rasmussen

Currently, the test context (fds, mmap-ed areas, etc.) is global. Each
test mutates this state in some way, in some cases really "clobbering"
it (e.g., the events test mremap-ing area_dst over the top of area_src,
or the minor fault tests overwriting the count_verify values in the
test areas). We run the tests in a particular order, and each test is
careful to make the right assumptions about its starting state. But
this is fragile. It's better for a test's success or failure not to
depend on what some prior test case did to the global state.

To that end, clear and reinitialize the test context at the start of
each test case, so whatever prior test cases did doesn't affect future
tests.

This is particularly relevant to this series because the events test's
mremap of area_dst screws up assumptions the minor fault test was
relying on. This wasn't a problem for hugetlb, as we don't mremap in
that case.
Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 215 ++++++++++++-----------
 1 file changed, 116 insertions(+), 99 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 1f65c4ab7994..3fbc69f513dc 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -89,7 +89,8 @@ static int shm_fd;
 static int huge_fd;
 static char *huge_fd_off0;
 static unsigned long long *count_verify;
-static int uffd, uffd_flags, finished, *pipefd;
+static int uffd = -1;
+static int uffd_flags, finished, *pipefd;
 static char *area_src, *area_src_alias, *area_dst, *area_dst_alias;
 static char *zeropage;
 pthread_attr_t attr;
@@ -342,6 +343,111 @@ static struct uffd_test_ops hugetlb_uffd_test_ops = {

 static struct uffd_test_ops *uffd_test_ops;

+static void userfaultfd_open(uint64_t *features)
+{
+	struct uffdio_api uffdio_api;
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
+	if (uffd < 0)
+		err("userfaultfd syscall not available in this kernel");
+	uffd_flags = fcntl(uffd, F_GETFD, NULL);
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = *features;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api))
+		err("UFFDIO_API failed.\nPlease make sure to "
+		    "run with either root or ptrace capability.");
+	if (uffdio_api.api != UFFD_API)
+		err("UFFDIO_API error: %" PRIu64, (uint64_t)uffdio_api.api);
+
+	*features = uffdio_api.features;
+}
+
+static inline void munmap_area(void **area)
+{
+	if (*area)
+		if (munmap(*area, nr_pages * page_size))
+			err("munmap");
+
+	*area = NULL;
+}
+
+static void uffd_test_ctx_clear(void)
+{
+	size_t i;
+
+	if (pipefd) {
+		for (i = 0; i < nr_cpus * 2; ++i) {
+			if (close(pipefd[i]))
+				err("close pipefd");
+		}
+		free(pipefd);
+		pipefd = NULL;
+	}
+
+	if (count_verify) {
+		free(count_verify);
+		count_verify = NULL;
+	}
+
+	if (uffd != -1) {
+		if (close(uffd))
+			err("close uffd");
+		uffd = -1;
+	}
+
+	huge_fd_off0 = NULL;
+	munmap_area((void **)&area_src);
+	munmap_area((void **)&area_src_alias);
+	munmap_area((void **)&area_dst);
+	munmap_area((void **)&area_dst_alias);
+}
+
+static void uffd_test_ctx_init_ext(uint64_t *features)
+{
+	unsigned long nr, cpu;
+
+	uffd_test_ctx_clear();
+
+	uffd_test_ops->allocate_area((void **)&area_src);
+	uffd_test_ops->allocate_area((void **)&area_dst);
+
+	uffd_test_ops->release_pages(area_src);
+	uffd_test_ops->release_pages(area_dst);
+
+	userfaultfd_open(features);
+
+	count_verify = malloc(nr_pages * sizeof(unsigned long long));
+	if (!count_verify)
+		err("count_verify");
+
+	for (nr = 0; nr < nr_pages; nr++) {
+		*area_mutex(area_src, nr) =
+			(pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER;
+		count_verify[nr] = *area_count(area_src, nr) = 1;
+		/*
+		 * In the transition between 255 to 256, powerpc will
+		 * read out of order in my_bcmp and see both bytes as
+		 * zero, so leave a placeholder below always non-zero
+		 * after the count, to avoid my_bcmp to trigger false
+		 * positives.
+ */ + *(area_count(area_src, nr) + 1) = 1; + } + + pipefd = malloc(sizeof(int) * nr_cpus * 2); + if (!pipefd) + err("pipefd"); + for (cpu = 0; cpu < nr_cpus; cpu++) + if (pipe2(&pipefd[cpu * 2], O_CLOEXEC | O_NONBLOCK)) + err("pipe"); +} + +static inline void uffd_test_ctx_init(uint64_t features) +{ + uffd_test_ctx_init_ext(&features); +} + static int my_bcmp(char *str1, char *str2, size_t n) { unsigned long i; @@ -726,40 +832,6 @@ static int stress(struct uffd_stats *uffd_stats) return 0; } -static int userfaultfd_open_ext(uint64_t *features) -{ - struct uffdio_api uffdio_api; - - uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY); - if (uffd < 0) { - fprintf(stderr, - "userfaultfd syscall not available in this kernel\n"); - return 1; - } - uffd_flags = fcntl(uffd, F_GETFD, NULL); - - uffdio_api.api = UFFD_API; - uffdio_api.features = *features; - if (ioctl(uffd, UFFDIO_API, &uffdio_api)) { - fprintf(stderr, "UFFDIO_API failed.\nPlease make sure to " - "run with either root or ptrace capability.\n"); - return 1; - } - if (uffdio_api.api != UFFD_API) { - fprintf(stderr, "UFFDIO_API error: %" PRIu64 "\n", - (uint64_t)uffdio_api.api); - return 1; - } - - *features = uffdio_api.features; - return 0; -} - -static int userfaultfd_open(uint64_t features) -{ - return userfaultfd_open_ext(&features); -} - sigjmp_buf jbuf, *sigbuf; static void sighndl(int sig, siginfo_t *siginfo, void *ptr) @@ -868,6 +940,8 @@ static int faulting_process(int signal_test) MREMAP_MAYMOVE | MREMAP_FIXED, area_src); if (area_dst == MAP_FAILED) err("mremap"); + /* Reset area_src since we just clobbered it */ + area_src = NULL; for (; nr < nr_pages; nr++) { count = *area_count(area_dst, nr); @@ -961,10 +1035,8 @@ static int userfaultfd_zeropage_test(void) printf("testing UFFDIO_ZEROPAGE: "); fflush(stdout); - uffd_test_ops->release_pages(area_dst); + uffd_test_ctx_init(0); - if (userfaultfd_open(0)) - return 1; uffdio_register.range.start = (unsigned long) area_dst; uffdio_register.range.len = nr_pages * page_size; uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING; @@ -981,7 +1053,6 @@ static int userfaultfd_zeropage_test(void) if (my_bcmp(area_dst, zeropage, page_size)) err("zeropage is not zero"); - close(uffd); printf("done.\n"); return 0; } @@ -999,12 +1070,10 @@ static int userfaultfd_events_test(void) printf("testing events (fork, remap, remove): "); fflush(stdout); - uffd_test_ops->release_pages(area_dst); - features = UFFD_FEATURE_EVENT_FORK | UFFD_FEATURE_EVENT_REMAP | UFFD_FEATURE_EVENT_REMOVE; - if (userfaultfd_open(features)) - return 1; + uffd_test_ctx_init(features); + fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); uffdio_register.range.start = (unsigned long) area_dst; @@ -1037,8 +1106,6 @@ static int userfaultfd_events_test(void) if (pthread_join(uffd_mon, NULL)) return 1; - close(uffd); - uffd_stats_report(&stats, 1); return stats.missing_faults != nr_pages; @@ -1058,11 +1125,9 @@ static int userfaultfd_sig_test(void) printf("testing signal delivery: "); fflush(stdout); - uffd_test_ops->release_pages(area_dst); - features = UFFD_FEATURE_EVENT_FORK|UFFD_FEATURE_SIGBUS; - if (userfaultfd_open(features)) - return 1; + uffd_test_ctx_init(features); + fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); uffdio_register.range.start = (unsigned long) area_dst; @@ -1103,7 +1168,6 @@ static int userfaultfd_sig_test(void) printf("done.\n"); if (userfaults) err("Signal test failed, userfaults: %ld", userfaults); - close(uffd); return userfaults != 0; } @@ -1126,10 +1190,7 @@ static int 
userfaultfd_minor_test(void) printf("testing minor faults: "); fflush(stdout); - uffd_test_ops->release_pages(area_dst); - - if (userfaultfd_open_ext(&features)) - return 1; + uffd_test_ctx_init_ext(&features); /* If kernel reports the feature isn't supported, skip the test. */ if (!(features & UFFD_FEATURE_MINOR_HUGETLBFS)) { printf("skipping test due to lack of feature support\n"); @@ -1183,8 +1244,6 @@ static int userfaultfd_minor_test(void) if (pthread_join(uffd_mon, NULL)) return 1; - close(uffd); - uffd_stats_report(&stats, 1); return stats.missing_faults != 0 || stats.minor_faults != nr_pages; @@ -1196,50 +1255,9 @@ static int userfaultfd_stress(void) char *tmp_area; unsigned long nr; struct uffdio_register uffdio_register; - unsigned long cpu; struct uffd_stats uffd_stats[nr_cpus]; - uffd_test_ops->allocate_area((void **)&area_src); - if (!area_src) - return 1; - uffd_test_ops->allocate_area((void **)&area_dst); - if (!area_dst) - return 1; - - if (userfaultfd_open(0)) - return 1; - - count_verify = malloc(nr_pages * sizeof(unsigned long long)); - if (!count_verify) { - perror("count_verify"); - return 1; - } - - for (nr = 0; nr < nr_pages; nr++) { - *area_mutex(area_src, nr) = (pthread_mutex_t) - PTHREAD_MUTEX_INITIALIZER; - count_verify[nr] = *area_count(area_src, nr) = 1; - /* - * In the transition between 255 to 256, powerpc will - * read out of order in my_bcmp and see both bytes as - * zero, so leave a placeholder below always non-zero - * after the count, to avoid my_bcmp to trigger false - * positives. - */ - *(area_count(area_src, nr) + 1) = 1; - } - - pipefd = malloc(sizeof(int) * nr_cpus * 2); - if (!pipefd) { - perror("pipefd"); - return 1; - } - for (cpu = 0; cpu < nr_cpus; cpu++) { - if (pipe2(&pipefd[cpu*2], O_CLOEXEC | O_NONBLOCK)) { - perror("pipe"); - return 1; - } - } + uffd_test_ctx_init(0); if (posix_memalign(&area, page_size, page_size)) err("out of memory"); @@ -1360,7 +1378,6 @@ static int userfaultfd_stress(void) uffd_stats_report(uffd_stats, nr_cpus); } - close(uffd); return userfaultfd_zeropage_test() || userfaultfd_sig_test() || userfaultfd_events_test() || userfaultfd_minor_test(); } From patchwork Tue Apr 20 22:08:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Axel Rasmussen X-Patchwork-Id: 12215121 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 754EAC43461 for ; Tue, 20 Apr 2021 22:08:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4C2B9613E6 for ; Tue, 20 Apr 2021 22:08:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234388AbhDTWJN (ORCPT ); Tue, 20 Apr 2021 18:09:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40420 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234327AbhDTWJA (ORCPT ); Tue, 20 Apr 2021 18:09:00 -0400 Received: from mail-qk1-x749.google.com (mail-qk1-x749.google.com 
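A note on the resulting call pattern (a sketch, not part of the patch): the
per-test boilerplate collapses from a release_pages()/userfaultfd_open()
pair, plus a manual close(uffd) on every exit path, into a single call:

	/* Before: */
	uffd_test_ops->release_pages(area_dst);
	if (userfaultfd_open(features))
		return 1;
	/* ... test body ... */
	close(uffd);

	/* After: teardown is implicit in the next test's init. */
	uffd_test_ctx_init(features);
	/* ... test body ... */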
From patchwork Tue Apr 20 22:08:02 2021
Date: Tue, 20 Apr 2021 15:08:02 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-9-axelrasmussen@google.com>
References: <20210420220804.486803-1-axelrasmussen@google.com>
Subject: [PATCH v4 08/10] userfaultfd/selftests: exercise minor fault
 handling shmem support
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
 Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
 Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
 "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

Enable test_uffdio_minor for test_type == TEST_SHMEM, and modify the test
slightly to pass in, and check for, the right feature flags.
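The feature selection reduces to a per-test-type choice. A minimal sketch of
the logic the diff below adds (names as in the patch):

	uint64_t req_features;

	if (test_type == TEST_SHMEM)
		req_features = UFFD_FEATURE_MINOR_SHMEM;
	else
		req_features = UFFD_FEATURE_MINOR_HUGETLBFS;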
Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 29 ++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 3fbc69f513dc..a7ecc9993439 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -474,6 +474,7 @@ static void wp_range(int ufd, __u64 start, __u64 len, bool wp)
 static void continue_range(int ufd, __u64 start, __u64 len)
 {
 	struct uffdio_continue req;
+	int ret;
 
 	req.range.start = start;
 	req.range.len = len;
@@ -482,6 +483,17 @@ static void continue_range(int ufd, __u64 start, __u64 len)
 	if (ioctl(ufd, UFFDIO_CONTINUE, &req))
 		err("UFFDIO_CONTINUE failed for address 0x%" PRIx64,
 		    (uint64_t)start);
+
+	/*
+	 * Error handling within the kernel for continue is subtly different
+	 * from copy or zeropage, so it may be a source of bugs. Trigger an
+	 * error (-EEXIST) on purpose, to verify doing so doesn't cause a BUG.
+	 */
+	req.mapped = 0;
+	ret = ioctl(ufd, UFFDIO_CONTINUE, &req);
+	if (ret >= 0 || req.mapped != -EEXIST)
+		err("failed to exercise UFFDIO_CONTINUE error handling, ret=%d, mapped=%" PRId64,
+		    ret, (int64_t) req.mapped);
 }
 
 static void *locking_thread(void *arg)
@@ -1182,7 +1194,7 @@ static int userfaultfd_minor_test(void)
 	void *expected_page;
 	char c;
 	struct uffd_stats stats = { 0 };
-	uint64_t features = UFFD_FEATURE_MINOR_HUGETLBFS;
+	uint64_t req_features, features_out;
 
 	if (!test_uffdio_minor)
 		return 0;
@@ -1190,9 +1202,17 @@ static int userfaultfd_minor_test(void)
 	printf("testing minor faults: ");
 	fflush(stdout);
 
-	uffd_test_ctx_init_ext(&features);
-	/* If kernel reports the feature isn't supported, skip the test. */
-	if (!(features & UFFD_FEATURE_MINOR_HUGETLBFS)) {
+	if (test_type == TEST_HUGETLB)
+		req_features = UFFD_FEATURE_MINOR_HUGETLBFS;
+	else if (test_type == TEST_SHMEM)
+		req_features = UFFD_FEATURE_MINOR_SHMEM;
+	else
+		return 1;
+
+	features_out = req_features;
+	uffd_test_ctx_init_ext(&features_out);
+	/* If kernel reports required features aren't supported, skip test. */
+	if ((features_out & req_features) != req_features) {
 		printf("skipping test due to lack of feature support\n");
 		fflush(stdout);
 		return 0;
@@ -1426,6 +1446,7 @@ static void set_test_type(const char *type)
 		map_shared = true;
 		test_type = TEST_SHMEM;
 		uffd_test_ops = &shmem_uffd_test_ops;
+		test_uffdio_minor = true;
 	} else {
 		err("Unknown test type: %s", type);
 	}
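For reference, a simplified sketch of how this test registers a range for
minor faults (the registration itself happens in a part of the test not
shown in the hunks above, so treat the exact arguments as illustrative; the
test registers the alias mapping so that accesses through area_dst raise
minor faults):

	struct uffdio_register reg;

	reg.range.start = (unsigned long)area_dst_alias;
	reg.range.len = nr_pages * page_size;
	reg.mode = UFFDIO_REGISTER_MODE_MINOR;
	if (ioctl(uffd, UFFDIO_REGISTER, &reg))
		err("UFFDIO_REGISTER failed");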
From patchwork Tue Apr 20 22:08:03 2021
Date: Tue, 20 Apr 2021 15:08:03 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-10-axelrasmussen@google.com>
References: <20210420220804.486803-1-axelrasmussen@google.com>
Subject: [PATCH v4 09/10] userfaultfd/shmem: modify shmem_mcopy_atomic_pte
 to use install_pte()
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
 Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
 Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
 "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

In a previous commit, we added the mcopy_atomic_install_pte() helper. This
helper does the job of setting up PTEs for an existing page, to map it into
a given VMA. It deals with both the anon and shmem cases, as well as the
shared and private cases.

In other words, shmem_mcopy_atomic_pte() duplicates a case it already
handles. So, expose it, and let shmem_mcopy_atomic_pte() use it directly,
to reduce code duplication.

This requires that we refactor shmem_mcopy_atomic_pte() a bit: instead of
doing accounting (shmem_recalc_inode() et al) part-way through the PTE
setup, do it beforehand. This frees up mcopy_atomic_install_pte() from
having to care about this accounting, but it does mean we need to clean it
up if we get a failure afterwards (shmem_uncharge()).

We can *almost* use shmem_charge() to do this, reducing code duplication.
But it does `inode->i_mapping->nrpages++`, which would double-count, since
shmem_add_to_page_cache() also does this.
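To spell out the double-count concern, a simplified sketch (not the actual
mm/shmem.c code) of roughly what shmem_charge() does:

	inode->i_mapping->nrpages += pages;	/* shmem_add_to_page_cache()
						 * already did this */
	info->alloced += pages;
	inode->i_blocks += pages * BLOCKS_PER_PAGE;
	shmem_recalc_inode(inode);

Calling it after shmem_add_to_page_cache() would therefore bump nrpages
twice, which is why the patch open-codes just the alloced/i_blocks/
shmem_recalc_inode() part, as seen in the shmem.c hunk below.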
Signed-off-by: Axel Rasmussen
---
 include/linux/userfaultfd_k.h |  5 ++++
 mm/shmem.c                    | 53 ++++++++---------------------------
 mm/userfaultfd.c              | 17 ++++-------
 3 files changed, 22 insertions(+), 53 deletions(-)

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 794d1538b8ba..39c094cc6641 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -53,6 +53,11 @@ enum mcopy_atomic_mode {
 	MCOPY_ATOMIC_CONTINUE,
 };
 
+extern int mcopy_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+				    struct vm_area_struct *dst_vma,
+				    unsigned long dst_addr, struct page *page,
+				    bool newly_allocated, bool wp_copy);
+
 extern ssize_t mcopy_atomic(struct mm_struct *dst_mm, unsigned long dst_start,
 			    unsigned long src_start, unsigned long len,
 			    bool *mmap_changing, __u64 mode);
diff --git a/mm/shmem.c b/mm/shmem.c
index 30c0bb501dc9..9bfa80fcd414 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2378,10 +2378,8 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	struct address_space *mapping = inode->i_mapping;
 	gfp_t gfp = mapping_gfp_mask(mapping);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	spinlock_t *ptl;
 	void *page_kaddr;
 	struct page *page;
-	pte_t _dst_pte, *dst_pte;
 	int ret;
 	pgoff_t max_off;
 
@@ -2391,8 +2389,10 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 
 	if (!*pagep) {
 		page = shmem_alloc_page(gfp, info, pgoff);
-		if (!page)
-			goto out_unacct_blocks;
+		if (!page) {
+			shmem_inode_unacct_blocks(inode, 1);
+			goto out;
+		}
 
 		if (!zeropage) {	/* COPY */
 			page_kaddr = kmap_atomic(page);
@@ -2432,59 +2432,28 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	if (ret)
 		goto out_release;
 
-	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
-	if (dst_vma->vm_flags & VM_WRITE)
-		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
-	else {
-		/*
-		 * We don't set the pte dirty if the vma has no
-		 * VM_WRITE permission, so mark the page dirty or it
-		 * could be freed from under us. We could do it
-		 * unconditionally before unlock_page(), but doing it
-		 * only if VM_WRITE is not set is faster.
-		 */
-		set_page_dirty(page);
-	}
-
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
-
-	ret = -EFAULT;
-	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(pgoff >= max_off))
-		goto out_release_unlock;
-
-	ret = -EEXIST;
-	if (!pte_none(*dst_pte))
-		goto out_release_unlock;
-
-	lru_cache_add(page);
-
 	spin_lock_irq(&info->lock);
 	info->alloced++;
 	inode->i_blocks += BLOCKS_PER_PAGE;
 	shmem_recalc_inode(inode);
 	spin_unlock_irq(&info->lock);
 
-	inc_mm_counter(dst_mm, mm_counter_file(page));
-	page_add_file_rmap(page, false);
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+	ret = mcopy_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, true, false);
+	if (ret)
+		goto out_release_uncharge;
 
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-	pte_unmap_unlock(dst_pte, ptl);
+	SetPageDirty(page);
 	unlock_page(page);
 	ret = 0;
 out:
 	return ret;
-out_release_unlock:
-	pte_unmap_unlock(dst_pte, ptl);
-	ClearPageDirty(page);
+out_release_uncharge:
 	delete_from_page_cache(page);
+	shmem_uncharge(inode, 1);
 out_release:
 	unlock_page(page);
 	put_page(page);
-out_unacct_blocks:
-	shmem_inode_unacct_blocks(inode, 1);
 	goto out;
 }
 #endif /* CONFIG_USERFAULTFD */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 51d8c0127161..3a9ddbb2dbbd 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -51,18 +51,13 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 /*
  * Install PTEs, to map dst_addr (within dst_vma) to page.
  *
- * This function handles MCOPY_ATOMIC_CONTINUE (which is always file-backed),
- * whether or not dst_vma is VM_SHARED. It also handles the more general
- * MCOPY_ATOMIC_NORMAL case, when dst_vma is *not* VM_SHARED (it may be file
- * backed, or not).
- *
- * Note that MCOPY_ATOMIC_NORMAL for a VM_SHARED dst_vma is handled by
- * shmem_mcopy_atomic_pte instead.
+ * This function handles both MCOPY_ATOMIC_NORMAL and _CONTINUE for both shmem
+ * and anon, and for both shared and private VMAs.
  */
-static int mcopy_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_addr, struct page *page,
-				    bool newly_allocated, bool wp_copy)
+int mcopy_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+			     struct vm_area_struct *dst_vma,
+			     unsigned long dst_addr, struct page *page,
+			     bool newly_allocated, bool wp_copy)
 {
 	int ret;
 	pte_t _dst_pte, *dst_pte;
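With the helper's static qualifier dropped and its prototype exported via
userfaultfd_k.h, the shmem side calls it directly, as in the shmem.c hunk
above: it passes newly_allocated=true for the page it has just allocated
and charged, and wp_copy=false since no write-protect copy is involved on
this path.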
From patchwork Tue Apr 20 22:08:04 2021
Date: Tue, 20 Apr 2021 15:08:04 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-11-axelrasmussen@google.com>
References: <20210420220804.486803-1-axelrasmussen@google.com>
Subject: [PATCH v4 10/10] userfaultfd: update documentation to mention shmem
 minor faults
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
 Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
 Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
 "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

Generally, the documentation we wrote for hugetlbfs-based minor faults
still all applies. The only missing piece is to mention the new feature
flag which indicates that the kernel supports this for shmem as well.

Signed-off-by: Axel Rasmussen
Acked-by: Hugh Dickins
---
 Documentation/admin-guide/mm/userfaultfd.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/mm/userfaultfd.rst b/Documentation/admin-guide/mm/userfaultfd.rst
index 3aa38e8b8361..6528036093e1 100644
--- a/Documentation/admin-guide/mm/userfaultfd.rst
+++ b/Documentation/admin-guide/mm/userfaultfd.rst
@@ -77,7 +77,8 @@ events, except page fault notifications, may be generated:
 
 - ``UFFD_FEATURE_MINOR_HUGETLBFS`` indicates that the kernel supports
   ``UFFDIO_REGISTER_MODE_MINOR`` registration for hugetlbfs virtual memory
-  areas.
+  areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
+  support for shmem virtual memory areas.
 
 The userland application should set the feature flags it intends to use
 when invoking the ``UFFDIO_API`` ioctl, to request that those features be
 enabled if possible.
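As an illustration of the negotiation the documentation describes (a
minimal userspace sketch following the same pattern as the selftest's
userfaultfd_open(); error handling elided):

	struct uffdio_api api;

	api.api = UFFD_API;
	api.features = UFFD_FEATURE_MINOR_SHMEM;	/* request up front */
	if (ioctl(uffd, UFFDIO_API, &api))
		perror("UFFDIO_API");
	else if (!(api.features & UFFD_FEATURE_MINOR_SHMEM))
		fprintf(stderr, "shmem minor faults not supported by this kernel\n");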