From patchwork Fri Feb 12 21:54:00 2021
X-Patchwork-Submitter: Axel Rasmussen
X-Patchwork-Id: 12086147
Date: Fri, 12 Feb 2021 13:54:00 -0800
In-Reply-To: <20210212215403.3457686-1-axelrasmussen@google.com>
Message-Id: <20210212215403.3457686-5-axelrasmussen@google.com>
References: <20210212215403.3457686-1-axelrasmussen@google.com>
X-Mailer: git-send-email 2.30.0.478.g8a0d178c01-goog
Subject: [PATCH v6 4/7] userfaultfd: hugetlbfs: only compile UFFD helpers if
 config enabled
From: Axel Rasmussen
To: Alexander Viro, Alexey Dobriyan, Andrea Arcangeli, Andrew Morton,
 Anshuman Khandual, Catalin Marinas, Chinwen Chang, Huang Ying,
 Ingo Molnar, Jann Horn, Jerome Glisse, Lokesh Gidra,
 "Matthew Wilcox (Oracle)", Michael Ellerman, Michal Koutný,
 Michel Lespinasse, Mike Kravetz, Mike Rapoport, Nicholas Piggin,
 Peter Xu, Shaohua Li, Shawn Anastasio, Steven Rostedt, Steven Price,
 Vlastimil Babka
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, Adam Ruprecht, Axel Rasmussen, Cannon Matthews,
 "Dr. David Alan Gilbert", David Rientjes, Mina Almasry, Oliver Upton

For background, mm/userfaultfd.c provides a general mcopy_atomic
implementation. But some types of memory (i.e., hugetlb and shmem) need
a slightly different implementation, so they provide their own helpers
for this. In other words, userfaultfd is the only caller of these
functions.

This patch achieves two things:

1. Don't spend time compiling code which will end up never being
   referenced anyway (a small build time optimization).

2. In patches later in this series, we extend the signature of these
   helpers with UFFD-specific state (a mode enumeration). Once this
   happens, we *have to* either not compile the helpers, or
   unconditionally define the UFFD-only state (which seems messier to
   me). This includes the declarations in the headers, as otherwise
   they'd yield warnings about implicitly defining the type of those
   arguments.

Reviewed-by: Mike Kravetz
Reviewed-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
 include/linux/hugetlb.h | 4 ++++
 mm/hugetlb.c            | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d740c6fd19ae..aa9e1d6de831 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -134,11 +134,13 @@ void hugetlb_show_meminfo(void);
 unsigned long hugetlb_total_pages(void);
 vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags);
+#ifdef CONFIG_USERFAULTFD
 int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
 				struct vm_area_struct *dst_vma,
 				unsigned long dst_addr,
 				unsigned long src_addr,
 				struct page **pagep);
+#endif /* CONFIG_USERFAULTFD */
 bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 						struct vm_area_struct *vma,
 						vm_flags_t vm_flags);
@@ -309,6 +311,7 @@ static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 	BUG();
 }
 
+#ifdef CONFIG_USERFAULTFD
 static inline int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 						pte_t *dst_pte,
 						struct vm_area_struct *dst_vma,
@@ -319,6 +322,7 @@ static inline int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	BUG();
 	return 0;
 }
+#endif /* CONFIG_USERFAULTFD */
 
 static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
 				     unsigned long sz)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 93307fb058b7..37b9ff7c2d04 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4638,6 +4638,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	return ret;
 }
 
+#ifdef CONFIG_USERFAULTFD
 /*
  * Used by userfaultfd UFFDIO_COPY. Based on mcopy_atomic_pte with
  * modifications for huge pages.
@@ -4768,6 +4769,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	put_page(page);
 	goto out;
 }
+#endif /* CONFIG_USERFAULTFD */
 
 static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
 				 int refs, struct page **pages,
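
For readers skimming the series: the change follows the same "real
declaration vs. BUG() stub" split that hugetlb.h already uses for
!CONFIG_HUGETLB_PAGE, just with one more #ifdef level for the optional
feature. A minimal standalone sketch of the pattern is below; the names
CONFIG_MY_SUBSYSTEM, CONFIG_MY_FEATURE, and my_feature_helper() are
hypothetical stand-ins chosen only to show the shape of the guard, not
actual kernel identifiers.

/*
 * Hypothetical header: the helper is declared only when its sole
 * caller (the optional feature) is compiled in, so neither the
 * declaration nor the definition exists in builds without it.
 */
#ifdef CONFIG_MY_SUBSYSTEM		/* stands in for CONFIG_HUGETLB_PAGE */

#ifdef CONFIG_MY_FEATURE		/* stands in for CONFIG_USERFAULTFD */
/* Real declaration; the definition lives in the subsystem's .c file. */
int my_feature_helper(struct mm_struct *dst_mm, unsigned long dst_addr);
#endif /* CONFIG_MY_FEATURE */

#else /* !CONFIG_MY_SUBSYSTEM */

#ifdef CONFIG_MY_FEATURE
/* Stub for builds without the subsystem: callers must never reach it. */
static inline int my_feature_helper(struct mm_struct *dst_mm,
				    unsigned long dst_addr)
{
	BUG();
	return 0;
}
#endif /* CONFIG_MY_FEATURE */

#endif /* CONFIG_MY_SUBSYSTEM */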