From patchwork Mon May 3 18:07:30 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12236733
Date: Mon, 3 May 2021 11:07:30 -0700
In-Reply-To: <20210503180737.2487560-1-axelrasmussen@google.com>
Message-Id: <20210503180737.2487560-4-axelrasmussen@google.com>
References: <20210503180737.2487560-1-axelrasmussen@google.com>
Subject: [PATCH v6 03/10] userfaultfd/shmem: support minor fault registration for shmem
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

This patch allows shmem-backed VMAs to be registered for minor faults.
Minor faults are appropriately relayed to userspace in the fault path,
for VMAs with the relevant flag.

This commit doesn't hook up the UFFDIO_CONTINUE ioctl for shmem-backed
minor faults, though, so userspace doesn't yet have a way to resolve
such faults. Because of this, we also don't yet advertise this as a
supported feature. That will be done in a separate commit when the
feature is fully implemented.

Acked-by: Peter Xu
Acked-by: Hugh Dickins
Signed-off-by: Axel Rasmussen
---
 fs/userfaultfd.c |  3 +--
 mm/memory.c      |  8 +++++---
 mm/shmem.c       | 12 +++++++++++-
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 14f92285d04f..468556fb04a9 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1267,8 +1267,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 	}
 
 	if (vm_flags & VM_UFFD_MINOR) {
-		/* FIXME: Add minor fault interception for shmem. */
-		if (!is_vm_hugetlb_page(vma))
+		if (!(is_vm_hugetlb_page(vma) || vma_is_shmem(vma)))
 			return false;
 	}
 
diff --git a/mm/memory.c b/mm/memory.c
index 86ba6c1f6821..9a536cfde7c8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3972,9 +3972,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
 	 * something).
 	 */
 	if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) {
-		ret = do_fault_around(vmf);
-		if (ret)
-			return ret;
+		if (likely(!userfaultfd_minor(vmf->vma))) {
+			ret = do_fault_around(vmf);
+			if (ret)
+				return ret;
+		}
 	}
 
 	ret = __do_fault(vmf);
diff --git a/mm/shmem.c b/mm/shmem.c
index 04de845b50b3..e361f1d81c8d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1785,7 +1785,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
  * vm. If we swap it in we mark it dirty since we also free the swap
  * entry since a page cannot live in both the swap and page cache.
  *
- * vmf and fault_type are only supplied by shmem_fault:
+ * vma, vmf, and fault_type are only supplied by shmem_fault:
  * otherwise they are NULL.
  */
 static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
@@ -1820,6 +1820,16 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	page = pagecache_get_page(mapping, index,
 					FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
+
+	if (page && vma && userfaultfd_minor(vma)) {
+		if (!xa_is_value(page)) {
+			unlock_page(page);
+			put_page(page);
+		}
+		*fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
+		return 0;
+	}
+
 	if (xa_is_value(page)) {
 		error = shmem_swapin_page(inode, index, &page,
 					  sgp, gfp, vma, fault_type);
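
For readers less familiar with the userfaultfd API, the sketch below is an
illustrative userspace example, not part of this patch; it only becomes
meaningful once the later commits in this series hook up UFFDIO_CONTINUE and
advertise UFFD_FEATURE_MINOR_SHMEM. It shows how a shmem-backed (memfd)
mapping would be registered for minor faults. The memfd name and region size
are arbitrary assumptions, and error handling is abbreviated.

/*
 * Illustrative sketch only -- not part of this patch. Assumes a kernel
 * where the full series has landed and UFFD_FEATURE_MINOR_SHMEM is
 * advertised by UFFDIO_API.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 16UL * 4096;	/* assumes 4 KiB pages; arbitrary size */

	/* Open a userfaultfd and negotiate the API/feature set. */
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0) {
		perror("userfaultfd");
		return 1;
	}
	struct uffdio_api api = {
		.api = UFFD_API,
		.features = UFFD_FEATURE_MINOR_SHMEM,
	};
	if (ioctl(uffd, UFFDIO_API, &api)) {
		perror("UFFDIO_API");
		return 1;
	}

	/* Back the region with shmem via memfd_create(), then map it. */
	int memfd = memfd_create("uffd-minor-demo", 0);
	if (memfd < 0 || ftruncate(memfd, len)) {
		perror("memfd");
		return 1;
	}
	char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			  memfd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Register the shmem-backed VMA for minor fault interception. */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)addr, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MINOR,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg)) {
		perror("UFFDIO_REGISTER");
		return 1;
	}

	printf("registered %zu bytes of shmem for minor faults\n", len);
	return 0;
}

After this registration, a fault on a page that is present in the shmem page
cache but not yet mapped in the faulting VMA would be delivered to the
monitor as a UFFD_PAGEFAULT_FLAG_MINOR event, to be resolved (in the later
commits of this series) with UFFDIO_CONTINUE.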