From patchwork Thu May 13 23:43:09 2021
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 12256853
Date: Thu, 13 May 2021 16:43:09 -0700
Message-Id: <20210513234309.366727-1-almasrymina@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
Subject: [PATCH] mm, hugetlb: fix resv_huge_pages underflow on UFFDIO_COPY
From: Mina Almasry
Cc: Mina Almasry, Axel Rasmussen, Peter Xu, linux-mm@kvack.org,
 Mike Kravetz, Andrew Morton, linux-kernel@vger.kernel.org

When hugetlb_mcopy_atomic_pte() is called with:
- mode == MCOPY_ATOMIC_NORMAL and,
- a page already present in the page cache for the associated address,

we allocate a huge page from the reserves, then fail to insert it into
the cache and return -EEXIST. In this case we should return -EEXIST
without allocating a new page, since the page already exists in the
cache. Allocating the extra page causes resv_huge_pages to underflow
temporarily until the extra page is freed.

To fix this, check whether a page already exists in the cache; if not,
allocate the page and insert it into the cache immediately while
holding the lock, and only then copy the contents into the page. As a
side effect, pages may exist in the cache for which the copy failed,
and for these pages PageUptodate(page) == false. Modify code that
queries the cache to handle this correctly.

Tested using:
./tools/testing/selftests/vm/userfaultfd hugetlb_shared 1024 200 \
        /mnt/huge

Test passes, and dmesg shows no underflow warnings.

Signed-off-by: Mina Almasry
Cc: Axel Rasmussen
Cc: Peter Xu
Cc: linux-mm@kvack.org
Cc: Mike Kravetz
Cc: Andrew Morton
Cc: linux-kernel@vger.kernel.org
---
 fs/hugetlbfs/inode.c |   2 +-
 mm/hugetlb.c         | 109 +++++++++++++++++++++++--------------------
 2 files changed, 60 insertions(+), 51 deletions(-)

--
2.31.1.751.gd2f1c929bd-goog

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a2a42335e8fd..cc027c335242 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -346,7 +346,7 @@ static ssize_t hugetlbfs_read_iter(struct kiocb *iocb, struct iov_iter *to)
 
                 /* Find the page */
                 page = find_lock_page(mapping, index);
-                if (unlikely(page == NULL)) {
+                if (unlikely(page == NULL || !PageUptodate(page))) {
                         /*
                          * We have a HOLE, zero out the user-buffer for the
                          * length of the hole or request.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 629aa4c2259c..a5a5fbf7ac25 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4543,7 +4543,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 
 retry:
         page = find_lock_page(mapping, idx);
-        if (!page) {
+        if (!page || !PageUptodate(page)) {
                 /* Check for page in userfault range */
                 if (userfaultfd_missing(vma)) {
                         ret = hugetlb_handle_userfault(vma, mapping, idx,
@@ -4552,26 +4552,30 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
                         goto out;
                 }
 
-                page = alloc_huge_page(vma, haddr, 0);
-                if (IS_ERR(page)) {
-                        /*
-                         * Returning error will result in faulting task being
-                         * sent SIGBUS. The hugetlb fault mutex prevents two
-                         * tasks from racing to fault in the same page which
-                         * could result in false unable to allocate errors.
-                         * Page migration does not take the fault mutex, but
-                         * does a clear then write of pte's under page table
-                         * lock. Page fault code could race with migration,
-                         * notice the clear pte and try to allocate a page
-                         * here. Before returning error, get ptl and make
-                         * sure there really is no pte entry.
-                         */
-                        ptl = huge_pte_lock(h, mm, ptep);
-                        ret = 0;
-                        if (huge_pte_none(huge_ptep_get(ptep)))
-                                ret = vmf_error(PTR_ERR(page));
-                        spin_unlock(ptl);
-                        goto out;
+                if (!page) {
+                        page = alloc_huge_page(vma, haddr, 0);
+                        if (IS_ERR(page)) {
+                                /*
+                                 * Returning error will result in faulting task
+                                 * being sent SIGBUS. The hugetlb fault mutex
+                                 * prevents two tasks from racing to fault in
+                                 * the same page which could result in false
+                                 * unable to allocate errors. Page migration
+                                 * does not take the fault mutex, but does a
+                                 * clear then write of pte's under page table
+                                 * lock. Page fault code could race with
+                                 * migration, notice the clear pte and try to
+                                 * allocate a page here. Before returning
+                                 * error, get ptl and make sure there really is
+                                 * no pte entry.
+                                 */
+                                ptl = huge_pte_lock(h, mm, ptep);
+                                ret = 0;
+                                if (huge_pte_none(huge_ptep_get(ptep)))
+                                        ret = vmf_error(PTR_ERR(page));
+                                spin_unlock(ptl);
+                                goto out;
+                        }
                 }
                 clear_huge_page(page, address, pages_per_huge_page(h));
                 __SetPageUptodate(page);
@@ -4868,31 +4872,55 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
                             struct page **pagep)
 {
         bool is_continue = (mode == MCOPY_ATOMIC_CONTINUE);
-        struct address_space *mapping;
-        pgoff_t idx;
+        struct hstate *h = hstate_vma(dst_vma);
+        struct address_space *mapping = dst_vma->vm_file->f_mapping;
+        pgoff_t idx = vma_hugecache_offset(h, dst_vma, dst_addr);
         unsigned long size;
         int vm_shared = dst_vma->vm_flags & VM_SHARED;
-        struct hstate *h = hstate_vma(dst_vma);
         pte_t _dst_pte;
         spinlock_t *ptl;
-        int ret;
+        int ret = -ENOMEM;
         struct page *page;
         int writable;
 
-        mapping = dst_vma->vm_file->f_mapping;
-        idx = vma_hugecache_offset(h, dst_vma, dst_addr);
-
         if (is_continue) {
                 ret = -EFAULT;
-                page = find_lock_page(mapping, idx);
-                if (!page)
+                page = hugetlbfs_pagecache_page(h, dst_vma, dst_addr);
+                if (!page) {
+                        ret = -ENOMEM;
                         goto out;
+                }
         } else if (!*pagep) {
-                ret = -ENOMEM;
+                /* If a page already exists, then it's UFFDIO_COPY for
+                 * a non-missing case. Return -EEXIST.
+                 */
+                if (hugetlbfs_pagecache_present(h, dst_vma, dst_addr)) {
+                        ret = -EEXIST;
+                        goto out;
+                }
+
                 page = alloc_huge_page(dst_vma, dst_addr, 0);
                 if (IS_ERR(page))
                         goto out;
 
+                /* Add shared, newly allocated pages to the page cache. */
+                if (vm_shared) {
+                        size = i_size_read(mapping->host) >> huge_page_shift(h);
+                        ret = -EFAULT;
+                        if (idx >= size)
+                                goto out;
+
+                        /*
+                         * Serialization between remove_inode_hugepages() and
+                         * huge_add_to_page_cache() below happens through the
+                         * hugetlb_fault_mutex_table that here must be hold by
+                         * the caller.
+                         */
+                        ret = huge_add_to_page_cache(page, mapping, idx);
+                        if (ret)
+                                goto out;
+                }
+
         ret = copy_huge_page_from_user(page,
                                                 (const void __user *) src_addr,
                                                 pages_per_huge_page(h), false);
@@ -4916,24 +4944,6 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
          */
         __SetPageUptodate(page);
 
-        /* Add shared, newly allocated pages to the page cache. */
-        if (vm_shared && !is_continue) {
-                size = i_size_read(mapping->host) >> huge_page_shift(h);
-                ret = -EFAULT;
-                if (idx >= size)
-                        goto out_release_nounlock;
-
-                /*
-                 * Serialization between remove_inode_hugepages() and
-                 * huge_add_to_page_cache() below happens through the
-                 * hugetlb_fault_mutex_table that here must be hold by
-                 * the caller.
-                 */
-                ret = huge_add_to_page_cache(page, mapping, idx);
-                if (ret)
-                        goto out_release_nounlock;
-        }
-
         ptl = huge_pte_lockptr(h, dst_mm, dst_pte);
         spin_lock(ptl);
 
@@ -4994,7 +5004,6 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
         spin_unlock(ptl);
         if (vm_shared || is_continue)
                 unlock_page(page);
-out_release_nounlock:
         put_page(page);
         goto out;
 }
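
For illustration, a minimal userspace sketch (untested here, and not the
selftest above) of the path this patch changes: a MAP_SHARED hugetlbfs
mapping registered with userfaultfd in MISSING mode and populated with
UFFDIO_COPY, which is what reaches hugetlb_mcopy_atomic_pte() with
mode == MCOPY_ATOMIC_NORMAL. The hugetlbfs file path, the 2MB huge page
size, and the lack of error handling are assumptions for brevity; the
selftest invocation in the commit message remains the actual test.

#include <errno.h>
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        size_t len = 2UL << 20;        /* one huge page; 2MB size is an assumption */

        /* Hypothetical file on a hugetlbfs mount. */
        int fd = open("/mnt/huge/uffd-copy-demo", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, len);

        /* Shared hugetlb mapping: missing faults resolved below via UFFDIO_COPY. */
        char *dst = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        char *src = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(src, 0xab, len);

        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        struct uffdio_register reg = {
                .range = { .start = (unsigned long)dst, .len = len },
                .mode = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        /* UFFDIO_COPY with mode == 0 is the MCOPY_ATOMIC_NORMAL case. */
        struct uffdio_copy copy = {
                .dst = (unsigned long)dst,
                .src = (unsigned long)src,
                .len = len,
                .mode = 0,
        };
        if (ioctl(uffd, UFFDIO_COPY, &copy) == -1 && errno == EEXIST)
                /* Destination already populated; nothing to copy. */
                fprintf(stderr, "UFFDIO_COPY: page already present\n");

        munmap(dst, len);
        munmap(src, len);
        close(uffd);
        close(fd);
        return 0;
}

The case the patch addresses is when a page is already present in the
page cache for the target index while this mapping's pte is still
unpopulated; with the hugetlbfs_pagecache_present() check added above,
hugetlb_mcopy_atomic_pte() returns -EEXIST before calling
alloc_huge_page(), so the hugetlb reserves are never touched.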