From patchwork Tue Mar 2 00:01:29 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12111487
Date: Mon, 1 Mar 2021 16:01:29 -0800
In-Reply-To: <20210302000133.272579-1-axelrasmussen@google.com>
Message-Id: <20210302000133.272579-2-axelrasmussen@google.com>
References: <20210302000133.272579-1-axelrasmussen@google.com>
Subject: [PATCH v2 1/5] userfaultfd: support minor fault handling for shmem
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Rapoport, Peter Xu,
    Shaohua Li, Shuah Khan, Wang Qing
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Axel Rasmussen,
    Brian Geffon, Cannon Matthews, "Dr . David Alan Gilbert",
    David Rientjes, Michel Lespinasse, Mina Almasry, Oliver Upton

Modify the userfaultfd register API to allow registering shmem VMAs in
minor mode. Modify the shmem mcopy implementation to support
UFFDIO_CONTINUE in order to resolve such faults.

Combine the shmem mcopy handler functions into a single
shmem_mcopy_atomic_pte, which takes a mode parameter. This matches how
the hugetlbfs implementation is structured, and lets us remove a good
chunk of boilerplate.

Signed-off-by: Axel Rasmussen
---
 fs/userfaultfd.c                 |  6 +--
 include/linux/shmem_fs.h         | 26 ++++-----
 include/uapi/linux/userfaultfd.h |  4 +-
 mm/memory.c                      |  8 +--
 mm/shmem.c                       | 92 +++++++++++++++-----------
 mm/userfaultfd.c                 | 27 +++++-----
 6 files changed, 79 insertions(+), 84 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 14f92285d04f..9f3b8684cf3c 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1267,8 +1267,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 	}
 
 	if (vm_flags & VM_UFFD_MINOR) {
-		/* FIXME: Add minor fault interception for shmem. */
-		if (!is_vm_hugetlb_page(vma))
+		if (!(is_vm_hugetlb_page(vma) || vma_is_shmem(vma)))
 			return false;
 	}
 
@@ -1941,7 +1940,8 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
 	/* report all available features and ioctls to userland */
 	uffdio_api.features = UFFD_API_FEATURES;
 #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
-	uffdio_api.features &= ~UFFD_FEATURE_MINOR_HUGETLBFS;
+	uffdio_api.features &=
+		~(UFFD_FEATURE_MINOR_HUGETLBFS | UFFD_FEATURE_MINOR_SHMEM);
 #endif
 	uffdio_api.ioctls = UFFD_API_IOCTLS;
 	ret = -EFAULT;
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d82b6f396588..f0919c3722e7 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 
 /* inode in-kernel data */
@@ -122,21 +123,16 @@ static inline bool shmem_file(struct file *file)
 extern bool shmem_charge(struct inode *inode, long pages);
 extern void shmem_uncharge(struct inode *inode, long pages);
 
+#ifdef CONFIG_USERFAULTFD
 #ifdef CONFIG_SHMEM
-extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
-				  struct vm_area_struct *dst_vma,
-				  unsigned long dst_addr,
-				  unsigned long src_addr,
-				  struct page **pagep);
-extern int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
-				    pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_addr);
-#else
-#define shmem_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma, dst_addr, \
-			       src_addr, pagep) ({ BUG(); 0; })
-#define shmem_mfill_zeropage_pte(dst_mm, dst_pmd, dst_vma, \
-				 dst_addr) ({ BUG(); 0; })
-#endif
+int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+			   struct vm_area_struct *dst_vma,
+			   unsigned long dst_addr, unsigned long src_addr,
+			   enum mcopy_atomic_mode mode, struct page **pagep);
+#else /* !CONFIG_SHMEM */
+#define shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr, \
+			       src_addr, mode, pagep) ({ BUG(); 0; })
+#endif /* CONFIG_SHMEM */ +#endif /* CONFIG_USERFAULTFD */ #endif diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h index bafbeb1a2624..47d9790d863d 100644 --- a/include/uapi/linux/userfaultfd.h +++ b/include/uapi/linux/userfaultfd.h @@ -31,7 +31,8 @@ UFFD_FEATURE_MISSING_SHMEM | \ UFFD_FEATURE_SIGBUS | \ UFFD_FEATURE_THREAD_ID | \ - UFFD_FEATURE_MINOR_HUGETLBFS) + UFFD_FEATURE_MINOR_HUGETLBFS | \ + UFFD_FEATURE_MINOR_SHMEM) #define UFFD_API_IOCTLS \ ((__u64)1 << _UFFDIO_REGISTER | \ (__u64)1 << _UFFDIO_UNREGISTER | \ @@ -196,6 +197,7 @@ struct uffdio_api { #define UFFD_FEATURE_SIGBUS (1<<7) #define UFFD_FEATURE_THREAD_ID (1<<8) #define UFFD_FEATURE_MINOR_HUGETLBFS (1<<9) +#define UFFD_FEATURE_MINOR_SHMEM (1<<10) __u64 features; __u64 ioctls; diff --git a/mm/memory.c b/mm/memory.c index c8e357627318..a1e5ff55027e 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3929,9 +3929,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf) * something). */ if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) { - ret = do_fault_around(vmf); - if (ret) - return ret; + if (likely(!userfaultfd_minor(vmf->vma))) { + ret = do_fault_around(vmf); + if (ret) + return ret; + } } ret = __do_fault(vmf); diff --git a/mm/shmem.c b/mm/shmem.c index b2db4ed0fbc7..6f81259fabb3 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -77,7 +77,6 @@ static struct vfsmount *shm_mnt; #include #include #include -#include #include #include @@ -1785,8 +1784,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, * vm. If we swap it in we mark it dirty since we also free the swap * entry since a page cannot live in both the swap and page cache. * - * vmf and fault_type are only supplied by shmem_fault: - * otherwise they are NULL. + * vma, vmf, and fault_type are only supplied by shmem_fault: otherwise they + * are NULL. 
*/ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, struct page **pagep, enum sgp_type sgp, gfp_t gfp, @@ -1830,6 +1829,12 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, return error; } + if (page && vma && userfaultfd_minor(vma)) { + unlock_page(page); + *fault_type = handle_userfault(vmf, VM_UFFD_MINOR); + return 0; + } + if (page) hindex = page->index; if (page && sgp == SGP_WRITE) @@ -2354,14 +2359,12 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode return inode; } -static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, - pmd_t *dst_pmd, - struct vm_area_struct *dst_vma, - unsigned long dst_addr, - unsigned long src_addr, - bool zeropage, - struct page **pagep) +int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd, + struct vm_area_struct *dst_vma, + unsigned long dst_addr, unsigned long src_addr, + enum mcopy_atomic_mode mode, struct page **pagep) { + bool is_continue = (mode == MCOPY_ATOMIC_CONTINUE); struct inode *inode = file_inode(dst_vma->vm_file); struct shmem_inode_info *info = SHMEM_I(inode); struct address_space *mapping = inode->i_mapping; @@ -2378,12 +2381,17 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, if (!shmem_inode_acct_block(inode, 1)) goto out; - if (!*pagep) { + if (is_continue) { + ret = -EFAULT; + page = find_lock_page(mapping, pgoff); + if (!page) + goto out_unacct_blocks; + } else if (!*pagep) { page = shmem_alloc_page(gfp, info, pgoff); if (!page) goto out_unacct_blocks; - if (!zeropage) { /* mcopy_atomic */ + if (mode == MCOPY_ATOMIC_NORMAL) { /* mcopy_atomic */ page_kaddr = kmap_atomic(page); ret = copy_from_user(page_kaddr, (const void __user *)src_addr, @@ -2397,7 +2405,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, /* don't free the page */ return -ENOENT; } - } else { /* mfill_zeropage_atomic */ + } else { /* zeropage */ clear_highpage(page); } } else { @@ -2405,10 +2413,13 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, *pagep = NULL; } - VM_BUG_ON(PageLocked(page) || PageSwapBacked(page)); - __SetPageLocked(page); - __SetPageSwapBacked(page); - __SetPageUptodate(page); + if (!is_continue) { + VM_BUG_ON(PageSwapBacked(page)); + VM_BUG_ON(PageLocked(page)); + __SetPageLocked(page); + __SetPageSwapBacked(page); + __SetPageUptodate(page); + } ret = -EFAULT; offset = linear_page_index(dst_vma, dst_addr); @@ -2416,10 +2427,13 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, if (unlikely(offset >= max_off)) goto out_release; - ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL, - gfp & GFP_RECLAIM_MASK, dst_mm); - if (ret) - goto out_release; + /* If page wasn't already in the page cache, add it. 
*/ + if (!is_continue) { + ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL, + gfp & GFP_RECLAIM_MASK, dst_mm); + if (ret) + goto out_release; + } _dst_pte = mk_pte(page, dst_vma->vm_page_prot); if (dst_vma->vm_flags & VM_WRITE) @@ -2446,13 +2460,15 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, if (!pte_none(*dst_pte)) goto out_release_unlock; - lru_cache_add(page); + if (!is_continue) { + lru_cache_add(page); - spin_lock_irq(&info->lock); - info->alloced++; - inode->i_blocks += BLOCKS_PER_PAGE; - shmem_recalc_inode(inode); - spin_unlock_irq(&info->lock); + spin_lock_irq(&info->lock); + info->alloced++; + inode->i_blocks += BLOCKS_PER_PAGE; + shmem_recalc_inode(inode); + spin_unlock_irq(&info->lock); + } inc_mm_counter(dst_mm, mm_counter_file(page)); page_add_file_rmap(page, false); @@ -2477,28 +2493,6 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, goto out; } -int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, - pmd_t *dst_pmd, - struct vm_area_struct *dst_vma, - unsigned long dst_addr, - unsigned long src_addr, - struct page **pagep) -{ - return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, - dst_addr, src_addr, false, pagep); -} - -int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm, - pmd_t *dst_pmd, - struct vm_area_struct *dst_vma, - unsigned long dst_addr) -{ - struct page *page = NULL; - - return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, - dst_addr, 0, true, &page); -} - #ifdef CONFIG_TMPFS static const struct inode_operations shmem_symlink_inode_operations; static const struct inode_operations shmem_short_symlink_operations; diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index ce6cb4760d2c..6cd7ab531aec 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -415,7 +415,7 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm, unsigned long dst_addr, unsigned long src_addr, struct page **page, - bool zeropage, + enum mcopy_atomic_mode mode, bool wp_copy) { ssize_t err; @@ -431,22 +431,24 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm, * and not in the radix tree. 
 	 */
 	if (!(dst_vma->vm_flags & VM_SHARED)) {
-		if (!zeropage)
+		switch (mode) {
+		case MCOPY_ATOMIC_NORMAL:
 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
 					       dst_addr, src_addr, page,
 					       wp_copy);
-		else
+			break;
+		case MCOPY_ATOMIC_ZEROPAGE:
 			err = mfill_zeropage_pte(dst_mm, dst_pmd,
 						 dst_vma, dst_addr);
+			break;
+		case MCOPY_ATOMIC_CONTINUE:
+			err = -EINVAL;
+			break;
+		}
 	} else {
 		VM_WARN_ON_ONCE(wp_copy);
-		if (!zeropage)
-			err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd,
-						     dst_vma, dst_addr,
-						     src_addr, page);
-		else
-			err = shmem_mfill_zeropage_pte(dst_mm, dst_pmd,
-						       dst_vma, dst_addr);
+		err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+					     src_addr, mode, page);
 	}
 
 	return err;
@@ -467,7 +469,6 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	long copied;
 	struct page *page;
 	bool wp_copy;
-	bool zeropage = (mcopy_mode == MCOPY_ATOMIC_ZEROPAGE);
 
 	/*
 	 * Sanitize the command parameters:
@@ -530,7 +531,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
 
-	if (mcopy_mode == MCOPY_ATOMIC_CONTINUE)
+	if (!vma_is_shmem(dst_vma) && mcopy_mode == MCOPY_ATOMIC_CONTINUE)
 		goto out_unlock;
 
 	/*
@@ -578,7 +579,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	BUG_ON(pmd_trans_huge(*dst_pmd));
 
 	err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
-			       src_addr, &page, zeropage, wp_copy);
+			       src_addr, &page, mcopy_mode, wp_copy);
 	cond_resched();
 
 	if (unlikely(err == -ENOENT)) {
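To make the intended usage concrete: with this patch applied, a monitor can register a shmem-backed VMA for minor faults and resolve each fault with UFFDIO_CONTINUE. The C sketch below is illustrative only and is not part of the patch: error handling is omitted, the helper names (register_minor, resolve_minor_fault) are invented, and it assumes a userfaultfd that has already completed the UFFDIO_API handshake plus two mappings of the same memfd, `registered` (registered below) and `alias` (left unregistered so its contents can be written directly). The uffdio_continue layout is assumed to match the existing hugetlbfs minor-fault UAPI.

/*
 * Illustrative sketch (not part of the patch): resolve shmem minor faults
 * from user space with UFFDIO_CONTINUE. Error handling is omitted and the
 * helper names are invented for the example.
 */
#include <linux/userfaultfd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Register a shmem-backed range for minor fault interception. */
static void register_minor(int uffd, void *addr, size_t len)
{
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)addr, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MINOR,
	};

	ioctl(uffd, UFFDIO_REGISTER, &reg);
}

/*
 * Handle one fault: the page is already present in the shmem page cache,
 * so fill in its contents through the unregistered alias mapping, then ask
 * the kernel to install the PTE and wake the faulting thread.
 */
static void resolve_minor_fault(int uffd, char *registered, char *alias,
				size_t page_size)
{
	struct uffd_msg msg;
	struct uffdio_continue cont;
	unsigned long fault_addr, offset;

	read(uffd, &msg, sizeof(msg));	/* blocking read of one event */
	if (msg.event != UFFD_EVENT_PAGEFAULT ||
	    !(msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_MINOR))
		return;

	fault_addr = msg.arg.pagefault.address & ~(page_size - 1);
	offset = fault_addr - (unsigned long)registered;
	memset(alias + offset, 0xaa, page_size);	/* stand-in for fetching real contents */

	cont.range.start = fault_addr;
	cont.range.len = page_size;
	cont.mode = 0;
	ioctl(uffd, UFFDIO_CONTINUE, &cont);
}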
From patchwork Tue Mar 2 00:01:30 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12111489
Date: Mon, 1 Mar 2021 16:01:30 -0800
In-Reply-To: <20210302000133.272579-1-axelrasmussen@google.com>
Message-Id: <20210302000133.272579-3-axelrasmussen@google.com>
References: <20210302000133.272579-1-axelrasmussen@google.com>
Subject: [PATCH v2 2/5] userfaultfd/selftests: use memfd_create for shmem test type
From: Axel Rasmussen

This is a preparatory commit. In the future, we want to be able to set
up alias mappings for area_src and area_dst in the shmem test, like we
do in the hugetlb_shared test. With a VMA obtained via
mmap(MAP_ANONYMOUS | MAP_SHARED), it isn't clear how to do this.

So, mmap() with an fd, so we can create alias mappings. Use
memfd_create instead of actually passing in a tmpfs path like hugetlb
does, since it's more convenient / simpler to run, and works just as
well.

Future commits will:

1. Set up the alias mappings.
2. Extend our tests to actually take advantage of this, to test new
   userfaultfd behavior being introduced in this series.

Also, a small fix in the area we're changing: when the hugetlb setup
fails in main(), pass in the right argv[] so we actually print out the
hugetlb file path.
Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 35 ++++++++++++++++++++----
 1 file changed, 30 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index f5ab5e0312e7..859398efb4fe 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -85,6 +85,7 @@ static bool test_uffdio_wp = false;
 static bool test_uffdio_minor = false;
 
 static bool map_shared;
+static int shm_fd;
 static int huge_fd;
 static char *huge_fd_off0;
 static unsigned long long *count_verify;
@@ -297,12 +298,20 @@ static int shmem_release_pages(char *rel_area)
 
 static void shmem_allocate_area(void **alloc_area)
 {
+	unsigned long offset =
+		alloc_area == (void **)&area_src ? 0 : nr_pages * page_size;
+
 	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
-			   MAP_ANONYMOUS | MAP_SHARED, -1, 0);
+			   MAP_SHARED, shm_fd, offset);
 	if (*alloc_area == MAP_FAILED) {
-		fprintf(stderr, "shared memory mmap failed\n");
-		*alloc_area = NULL;
+		perror("mmap of memfd failed");
+		goto fail;
 	}
+
+	return;
+
+fail:
+	*alloc_area = NULL;
 }
 
 struct uffd_test_ops {
@@ -1672,15 +1681,31 @@ int main(int argc, char **argv)
 			usage();
 		huge_fd = open(argv[4], O_CREAT | O_RDWR, 0755);
 		if (huge_fd < 0) {
-			fprintf(stderr, "Open of %s failed", argv[3]);
+			fprintf(stderr, "Open of %s failed", argv[4]);
 			perror("open");
 			exit(1);
 		}
 		if (ftruncate(huge_fd, 0)) {
-			fprintf(stderr, "ftruncate %s to size 0 failed", argv[3]);
+			fprintf(stderr, "ftruncate %s to size 0 failed", argv[4]);
 			perror("ftruncate");
 			exit(1);
 		}
+	} else if (test_type == TEST_SHMEM) {
+		shm_fd = memfd_create(argv[0], 0);
+		if (shm_fd < 0) {
+			perror("memfd_create");
+			exit(1);
+		}
+		if (ftruncate(shm_fd, nr_pages * page_size * 2)) {
+			perror("ftruncate");
+			exit(1);
+		}
+		if (fallocate(shm_fd,
+			      FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0,
+			      nr_pages * page_size * 2)) {
+			perror("fallocate");
+			exit(1);
+		}
 	}
 	printf("nr_pages: %lu, nr_pages_per_cpu: %lu\n",
 	       nr_pages, nr_pages_per_cpu);
From patchwork Tue Mar 2 00:01:31 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12111501
Date: Mon, 1 Mar 2021 16:01:31 -0800
In-Reply-To: <20210302000133.272579-1-axelrasmussen@google.com>
Message-Id: <20210302000133.272579-4-axelrasmussen@google.com>
References: <20210302000133.272579-1-axelrasmussen@google.com>
Subject: [PATCH v2 3/5] userfaultfd/selftests: create alias mappings in the shmem test
From: Axel Rasmussen

Previously, we just allocated two shm areas: area_src and area_dst.
With this commit, change this so we also allocate area_src_alias, and
area_dst_alias.

area_*_alias and area_* (respectively) point to the same underlying
physical pages, but are different VMAs. In a future commit in this
series, we'll leverage this setup to exercise minor fault handling
support for shmem, just like we do in the hugetlb_shared test.
Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 29 +++++++++++++++++++++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 859398efb4fe..4a18590fe0f8 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -298,8 +298,9 @@ static int shmem_release_pages(char *rel_area)
 
 static void shmem_allocate_area(void **alloc_area)
 {
-	unsigned long offset =
-		alloc_area == (void **)&area_src ? 0 : nr_pages * page_size;
+	void *area_alias = NULL;
+	bool is_src = alloc_area == (void **)&area_src;
+	unsigned long offset = is_src ? 0 : nr_pages * page_size;
 
 	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
 			   MAP_SHARED, shm_fd, offset);
@@ -308,12 +309,34 @@ static void shmem_allocate_area(void **alloc_area)
 		goto fail;
 	}
 
+	area_alias = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
+			  MAP_SHARED, shm_fd, offset);
+	if (area_alias == MAP_FAILED) {
+		perror("mmap of memfd alias failed");
+		goto fail_munmap;
+	}
+
+	if (is_src)
+		area_src_alias = area_alias;
+	else
+		area_dst_alias = area_alias;
+
 	return;
 
+fail_munmap:
+	if (munmap(*alloc_area, nr_pages * page_size) < 0) {
+		perror("munmap of memfd failed\n");
+		exit(1);
+	}
 fail:
 	*alloc_area = NULL;
 }
 
+static void shmem_alias_mapping(__u64 *start, size_t len, unsigned long offset)
+{
+	*start = (unsigned long)area_dst_alias + offset;
+}
+
 struct uffd_test_ops {
 	unsigned long expected_ioctls;
 	void (*allocate_area)(void **alloc_area);
@@ -341,7 +364,7 @@ static struct uffd_test_ops shmem_uffd_test_ops = {
 	.expected_ioctls = SHMEM_EXPECTED_IOCTLS,
 	.allocate_area = shmem_allocate_area,
 	.release_pages = shmem_release_pages,
-	.alias_mapping = noop_alias_mapping,
+	.alias_mapping = shmem_alias_mapping,
 };
 
 static struct uffd_test_ops hugetlb_uffd_test_ops = {
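A side note on the alias-mapping idea used above (and in the hugetlb_shared test): because the areas are now backed by a memfd, the same file offset can simply be mapped twice, giving two distinct VMAs over the same physical pages. The standalone sketch below is illustrative only, not part of the patch; error handling is omitted and the memfd name is arbitrary.

#define _GNU_SOURCE		/* memfd_create() with glibc */
#include <assert.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;
	int fd = memfd_create("shmem-alias-demo", 0);

	ftruncate(fd, len);

	/* Two distinct VMAs, same memfd offset, same underlying pages. */
	char *a = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	char *b = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	/* A store through one mapping is visible through the other. */
	strcpy(a, "hello");
	assert(strcmp(b, "hello") == 0);

	munmap(a, len);
	munmap(b, len);
	close(fd);
	return 0;
}

This is exactly the property the selftest relies on: it can write "ground truth" contents through the alias VMA while the other VMA is registered with userfaultfd.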
From patchwork Tue Mar 2 00:01:32 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12111491
Date: Mon, 1 Mar 2021 16:01:32 -0800
In-Reply-To: <20210302000133.272579-1-axelrasmussen@google.com>
Message-Id: <20210302000133.272579-5-axelrasmussen@google.com>
References: <20210302000133.272579-1-axelrasmussen@google.com>
Subject: [PATCH v2 4/5] userfaultfd/selftests: reinitialize test context in each test
From: Axel Rasmussen

Currently, the context (fds, mmap-ed areas, etc.) is global. Each test
mutates this state in some way, in some cases really "clobbering it"
(e.g., the events test mremap-ing area_dst over the top of area_src, or
the minor faults tests overwriting the count_verify values in the test
areas). We run the tests in a particular order, and each test is
careful to make the right assumptions about its starting state. But
this is fragile. It's better for a test's success or failure to not
depend on what some other prior test case did to the global state.

To that end, clear and reinitialize the test context at the start of
each test case, so whatever prior test cases did doesn't affect future
tests.

This is particularly relevant to this series because the events test's
mremap of area_dst screws up assumptions the minor fault test was
relying on. This wasn't a problem for hugetlb, as we don't mremap in
that case.
Signed-off-by: Axel Rasmussen --- tools/testing/selftests/vm/userfaultfd.c | 249 ++++++++++++++--------- 1 file changed, 151 insertions(+), 98 deletions(-) diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c index 4a18590fe0f8..5183ddb3080d 100644 --- a/tools/testing/selftests/vm/userfaultfd.c +++ b/tools/testing/selftests/vm/userfaultfd.c @@ -89,7 +89,8 @@ static int shm_fd; static int huge_fd; static char *huge_fd_off0; static unsigned long long *count_verify; -static int uffd, uffd_flags, finished, *pipefd; +static int uffd = -1; +static int uffd_flags, finished, *pipefd; static char *area_src, *area_src_alias, *area_dst, *area_dst_alias; static char *zeropage; pthread_attr_t attr; @@ -376,6 +377,146 @@ static struct uffd_test_ops hugetlb_uffd_test_ops = { static struct uffd_test_ops *uffd_test_ops; +static int userfaultfd_open(uint64_t *features) +{ + struct uffdio_api uffdio_api; + + uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); + if (uffd < 0) { + fprintf(stderr, + "userfaultfd syscall not available in this kernel\n"); + return 1; + } + uffd_flags = fcntl(uffd, F_GETFD, NULL); + + uffdio_api.api = UFFD_API; + uffdio_api.features = *features; + if (ioctl(uffd, UFFDIO_API, &uffdio_api)) { + fprintf(stderr, "UFFDIO_API failed.\nPlease make sure to " + "run with either root or ptrace capability.\n"); + return 1; + } + if (uffdio_api.api != UFFD_API) { + fprintf(stderr, "UFFDIO_API error: %" PRIu64 "\n", + (uint64_t)uffdio_api.api); + return 1; + } + + *features = uffdio_api.features; + return 0; +} + +static int uffd_test_ctx_init_ext(uint64_t *features) +{ + unsigned long nr, cpu; + + uffd_test_ops->allocate_area((void **)&area_src); + if (!area_src) + return 1; + uffd_test_ops->allocate_area((void **)&area_dst); + if (!area_dst) + return 1; + + if (uffd_test_ops->release_pages(area_src)) + return 1; + + if (uffd_test_ops->release_pages(area_dst)) + return 1; + + if (userfaultfd_open(features)) + return 1; + + count_verify = malloc(nr_pages * sizeof(unsigned long long)); + if (!count_verify) { + perror("count_verify"); + return 1; + } + + for (nr = 0; nr < nr_pages; nr++) { + *area_mutex(area_src, nr) = + (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER; + count_verify[nr] = *area_count(area_src, nr) = 1; + /* + * In the transition between 255 to 256, powerpc will + * read out of order in my_bcmp and see both bytes as + * zero, so leave a placeholder below always non-zero + * after the count, to avoid my_bcmp to trigger false + * positives. 
+ */ + *(area_count(area_src, nr) + 1) = 1; + } + + pipefd = malloc(sizeof(int) * nr_cpus * 2); + if (!pipefd) { + perror("pipefd"); + return 1; + } + for (cpu = 0; cpu < nr_cpus; cpu++) { + if (pipe2(&pipefd[cpu * 2], O_CLOEXEC | O_NONBLOCK)) { + perror("pipe"); + return 1; + } + } + + return 0; +} + +static inline int uffd_test_ctx_init(uint64_t features) +{ + return uffd_test_ctx_init_ext(&features); +} + +static inline int munmap_area(void **area) +{ + if (*area) { + if (munmap(*area, nr_pages * page_size)) { + perror("munmap"); + return 1; + } + } + + *area = NULL; + return 0; +} + +static int uffd_test_ctx_clear(void) +{ + int ret = 0; + size_t i; + + if (pipefd) { + for (i = 0; i < nr_cpus * 2; ++i) { + if (close(pipefd[i])) { + perror("close pipefd"); + ret = 1; + } + } + free(pipefd); + pipefd = NULL; + } + + if (count_verify) { + free(count_verify); + count_verify = NULL; + } + + if (uffd != -1) { + if (close(uffd)) { + perror("close uffd"); + ret = 1; + } + uffd = -1; + } + + huge_fd_off0 = NULL; + ret |= munmap_area((void **)&area_src); + ret |= munmap_area((void **)&area_src_alias); + ret |= munmap_area((void **)&area_dst); + ret |= munmap_area((void **)&area_dst_alias); + + return ret; +} + static int my_bcmp(char *str1, char *str2, size_t n) { unsigned long i; @@ -859,40 +1000,6 @@ static int stress(struct uffd_stats *uffd_stats) return 0; } -static int userfaultfd_open_ext(uint64_t *features) -{ - struct uffdio_api uffdio_api; - - uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); - if (uffd < 0) { - fprintf(stderr, - "userfaultfd syscall not available in this kernel\n"); - return 1; - } - uffd_flags = fcntl(uffd, F_GETFD, NULL); - - uffdio_api.api = UFFD_API; - uffdio_api.features = *features; - if (ioctl(uffd, UFFDIO_API, &uffdio_api)) { - fprintf(stderr, "UFFDIO_API failed.\nPlease make sure to " - "run with either root or ptrace capability.\n"); - return 1; - } - if (uffdio_api.api != UFFD_API) { - fprintf(stderr, "UFFDIO_API error: %" PRIu64 "\n", - (uint64_t)uffdio_api.api); - return 1; - } - - *features = uffdio_api.features; - return 0; -} - -static int userfaultfd_open(uint64_t features) -{ - return userfaultfd_open_ext(&features); -} - sigjmp_buf jbuf, *sigbuf; static void sighndl(int sig, siginfo_t *siginfo, void *ptr) @@ -1010,6 +1117,8 @@ static int faulting_process(int signal_test) perror("mremap"); exit(1); } + /* Reset area_src since we just clobbered it */ + area_src = NULL; for (; nr < nr_pages; nr++) { count = *area_count(area_dst, nr); @@ -1113,11 +1222,9 @@ static int userfaultfd_zeropage_test(void) printf("testing UFFDIO_ZEROPAGE: "); fflush(stdout); - if (uffd_test_ops->release_pages(area_dst)) + if (uffd_test_ctx_clear() || uffd_test_ctx_init(0)) return 1; - if (userfaultfd_open(0)) - return 1; uffdio_register.range.start = (unsigned long) area_dst; uffdio_register.range.len = nr_pages * page_size; uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING; @@ -1143,7 +1250,6 @@ static int userfaultfd_zeropage_test(void) } } - close(uffd); printf("done.\n"); return 0; } @@ -1161,13 +1267,11 @@ static int userfaultfd_events_test(void) printf("testing events (fork, remap, remove): "); fflush(stdout); - if (uffd_test_ops->release_pages(area_dst)) - return 1; - features = UFFD_FEATURE_EVENT_FORK | UFFD_FEATURE_EVENT_REMAP | UFFD_FEATURE_EVENT_REMOVE; - if (userfaultfd_open(features)) + if (uffd_test_ctx_clear() || uffd_test_ctx_init(features)) return 1; + fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); uffdio_register.range.start = (unsigned long) 
area_dst; @@ -1213,8 +1317,6 @@ static int userfaultfd_events_test(void) if (pthread_join(uffd_mon, NULL)) return 1; - close(uffd); - uffd_stats_report(&stats, 1); return stats.missing_faults != nr_pages; @@ -1234,12 +1336,10 @@ static int userfaultfd_sig_test(void) printf("testing signal delivery: "); fflush(stdout); - if (uffd_test_ops->release_pages(area_dst)) - return 1; - features = UFFD_FEATURE_EVENT_FORK|UFFD_FEATURE_SIGBUS; - if (userfaultfd_open(features)) + if (uffd_test_ctx_clear() || uffd_test_ctx_init(features)) return 1; + fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); uffdio_register.range.start = (unsigned long) area_dst; @@ -1297,7 +1397,6 @@ static int userfaultfd_sig_test(void) if (userfaults) fprintf(stderr, "Signal test failed, userfaults: %ld\n", userfaults); - close(uffd); return userfaults != 0; } @@ -1319,10 +1418,7 @@ static int userfaultfd_minor_test(void) printf("testing minor faults: "); fflush(stdout); - if (uffd_test_ops->release_pages(area_dst)) - return 1; - - if (userfaultfd_open_ext(&features)) + if (uffd_test_ctx_clear() || uffd_test_ctx_init_ext(&features)) return 1; /* If kernel reports the feature isn't supported, skip the test. */ if (!(features & UFFD_FEATURE_MINOR_HUGETLBFS)) { @@ -1390,8 +1486,6 @@ static int userfaultfd_minor_test(void) if (pthread_join(uffd_mon, NULL)) return 1; - close(uffd); - uffd_stats_report(&stats, 1); return stats.missing_faults != 0 || stats.minor_faults != nr_pages; @@ -1403,52 +1497,12 @@ static int userfaultfd_stress(void) char *tmp_area; unsigned long nr; struct uffdio_register uffdio_register; - unsigned long cpu; int err; struct uffd_stats uffd_stats[nr_cpus]; - uffd_test_ops->allocate_area((void **)&area_src); - if (!area_src) - return 1; - uffd_test_ops->allocate_area((void **)&area_dst); - if (!area_dst) - return 1; - - if (userfaultfd_open(0)) + if (uffd_test_ctx_init(0)) return 1; - count_verify = malloc(nr_pages * sizeof(unsigned long long)); - if (!count_verify) { - perror("count_verify"); - return 1; - } - - for (nr = 0; nr < nr_pages; nr++) { - *area_mutex(area_src, nr) = (pthread_mutex_t) - PTHREAD_MUTEX_INITIALIZER; - count_verify[nr] = *area_count(area_src, nr) = 1; - /* - * In the transition between 255 to 256, powerpc will - * read out of order in my_bcmp and see both bytes as - * zero, so leave a placeholder below always non-zero - * after the count, to avoid my_bcmp to trigger false - * positives. 
-	 */
-	*(area_count(area_src, nr) + 1) = 1;
-	}
-
-	pipefd = malloc(sizeof(int) * nr_cpus * 2);
-	if (!pipefd) {
-		perror("pipefd");
-		return 1;
-	}
-	for (cpu = 0; cpu < nr_cpus; cpu++) {
-		if (pipe2(&pipefd[cpu*2], O_CLOEXEC | O_NONBLOCK)) {
-			perror("pipe");
-			return 1;
-		}
-	}
-
 	if (posix_memalign(&area, page_size, page_size)) {
 		fprintf(stderr, "out of memory\n");
 		return 1;
@@ -1593,7 +1647,6 @@ static int userfaultfd_stress(void)
 	if (err)
 		return err;
 
-	close(uffd);
 	return userfaultfd_zeropage_test() || userfaultfd_sig_test() ||
 	       userfaultfd_events_test() || userfaultfd_minor_test();
 }
From patchwork Tue Mar 2 00:01:33 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12111503
Date: Mon, 1 Mar 2021 16:01:33 -0800
In-Reply-To: <20210302000133.272579-1-axelrasmussen@google.com>
Message-Id: <20210302000133.272579-6-axelrasmussen@google.com>
References: <20210302000133.272579-1-axelrasmussen@google.com>
Subject: [PATCH v2 5/5] userfaultfd/selftests: exercise minor fault handling shmem support
From: Axel Rasmussen

Enable test_uffdio_minor for test_type == TEST_SHMEM, and modify the
test slightly to pass in / check for the right feature flags.

Signed-off-by: Axel Rasmussen
---
 tools/testing/selftests/vm/userfaultfd.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
index 5183ddb3080d..f31e9a4edc55 100644
--- a/tools/testing/selftests/vm/userfaultfd.c
+++ b/tools/testing/selftests/vm/userfaultfd.c
@@ -1410,7 +1410,7 @@ static int userfaultfd_minor_test(void)
 	void *expected_page;
 	char c;
 	struct uffd_stats stats = { 0 };
-	uint64_t features = UFFD_FEATURE_MINOR_HUGETLBFS;
+	uint64_t req_features, features_out;
 
 	if (!test_uffdio_minor)
 		return 0;
@@ -1418,10 +1418,18 @@ static int userfaultfd_minor_test(void)
 	printf("testing minor faults: ");
 	fflush(stdout);
 
-	if (uffd_test_ctx_clear() || uffd_test_ctx_init_ext(&features))
+	if (test_type == TEST_HUGETLB)
+		req_features = UFFD_FEATURE_MINOR_HUGETLBFS;
+	else if (test_type == TEST_SHMEM)
+		req_features = UFFD_FEATURE_MINOR_SHMEM;
+	else
+		return 1;
+
+	features_out = req_features;
+	if (uffd_test_ctx_clear() || uffd_test_ctx_init_ext(&features_out))
 		return 1;
-	/* If kernel reports the feature isn't supported, skip the test. */
-	if (!(features & UFFD_FEATURE_MINOR_HUGETLBFS)) {
+	/* If kernel reports required features aren't supported, skip test. */
+	if ((features_out & req_features) != req_features) {
 		printf("skipping test due to lack of feature support\n");
 		fflush(stdout);
 		return 0;
@@ -1431,7 +1439,7 @@ static int userfaultfd_minor_test(void)
 	uffdio_register.range.start = (unsigned long)area_dst_alias;
 	uffdio_register.range.len = nr_pages * page_size;
 	uffdio_register.mode = UFFDIO_REGISTER_MODE_MINOR;
 	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register)) {
-		fprintf(stderr, "register failure\n");
+		perror("register failure");
 		exit(1);
 	}
 
@@ -1695,6 +1703,7 @@ static void set_test_type(const char *type)
 		map_shared = true;
 		test_type = TEST_SHMEM;
 		uffd_test_ops = &shmem_uffd_test_ops;
+		test_uffdio_minor = true;
 	} else {
 		fprintf(stderr, "Unknown test type: %s\n", type);
 		exit(1);
 	}
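A closing usage note, not part of the series: an application that wants to depend on shmem minor faults can probe for the new feature bit via the UFFDIO_API handshake, much as uffd_test_ctx_init_ext() does in the selftest. A minimal sketch follows; it assumes uapi headers that already define UFFD_FEATURE_MINOR_SHMEM, and on older kernels the ioctl itself may fail with EINVAL, which should equally be treated as "unsupported".

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = {
		.api = UFFD_API,
		.features = UFFD_FEATURE_MINOR_SHMEM,	/* request the feature */
	};

	/*
	 * The handshake reports the supported feature set back in
	 * api.features; check for the bit we need before relying on it.
	 */
	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
	    !(api.features & UFFD_FEATURE_MINOR_SHMEM)) {
		printf("minor faults on shmem: not supported\n");
		return 1;
	}

	printf("minor faults on shmem: supported\n");
	return 0;
}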