From patchwork Thu Mar 21 22:07:57 2024
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 13599438
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org, Michael Ellerman, Christophe Leroy,
 Matthew Wilcox, Rik van Riel, Lorenzo Stoakes, Axel Rasmussen,
 peterx@redhat.com, Yang Shi, John Hubbard,
 linux-arm-kernel@lists.infradead.org, "Kirill A. Shutemov", Andrew Jones,
 Vlastimil Babka, Mike Rapoport, Andrew Morton, Muchun Song,
 Christoph Hellwig, linux-riscv@lists.infradead.org, James Houghton,
 David Hildenbrand, Jason Gunthorpe, Andrea Arcangeli,
 "Aneesh Kumar K. V", Mike Kravetz
Subject: [PATCH v3 07/12] mm/gup: Handle hugetlb for no_page_table()
Date: Thu, 21 Mar 2024 18:07:57 -0400
Message-ID: <20240321220802.679544-8-peterx@redhat.com>
In-Reply-To: <20240321220802.679544-1-peterx@redhat.com>
References: <20240321220802.679544-1-peterx@redhat.com>

From: Peter Xu <peterx@redhat.com>

no_page_table() is not yet used for hugetlb code paths; prepare it for
that.  The major difference is that, for hugetlb, no_page_table() will
return -EFAULT whenever the page cache does not exist, even for VM_SHARED
mappings.  See hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index f3ae8f6ce8a4..943671736080 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-		unsigned int flags)
+		unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
 	 */
-	if ((flags & FOLL_DUMP) &&
-	    (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -777,10 +785,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 	p4dp = p4d_offset(pgdp, address);
 	p4d = READ_ONCE(*p4dp);
 	if (!p4d_present(p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_leaf(p4d));
 	if (unlikely(p4d_bad(p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4dp, flags, ctx);
 }
@@ -830,7 +838,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	pgd = pgd_offset(mm, address);
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
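
[Editor's note: for readers following along, below is no_page_table() as it
reads once the first hunk above is applied.  The function body is
reconstructed directly from the diff; only the comments marked "note:" are
editorial annotations and are not part of the patch.]

/*
 * Reconstructed from the first hunk above (mm/gup.c after this patch);
 * comments marked "note:" are editorial, not the author's.
 */
static struct page *no_page_table(struct vm_area_struct *vma,
		unsigned int flags, unsigned long address)
{
	/* note: without FOLL_DUMP, every caller just sees "no page here" */
	if (!(flags & FOLL_DUMP))
		return NULL;

	/*
	 * When core dumping, we don't want to allocate unnecessary pages or
	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
	 * then get_dump_page() will return NULL to leave a hole in the dump.
	 * But we can only make this optimization where a hole would surely
	 * be zero-filled if handle_mm_fault() actually did handle it.
	 */
	if (is_vm_hugetlb_page(vma)) {
		struct hstate *h = hstate_vma(vma);

		/*
		 * note: this is the behavior change described in the commit
		 * message -- a hugetlb hole is one with no page cache at
		 * @address, even for VM_SHARED, which is why the function
		 * now takes "address".
		 */
		if (!hugetlbfs_pagecache_present(h, vma, address))
			return ERR_PTR(-EFAULT);
	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
		return ERR_PTR(-EFAULT);
	}

	return NULL;
}

Inverting the old "(flags & FOLL_DUMP) && ..." condition into an early
return lets the hugetlb and non-hugetlb cases share the same tail, which is
why the hunk shows more churn than the logic change alone would suggest.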