From patchwork Wed Mar 27 15:23:27 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606844
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Yang Shi, "Kirill A. Shutemov", Mike Kravetz, John Hubbard,
    Michael Ellerman, peterx@redhat.com, Andrew Jones, Muchun Song,
    linux-riscv@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    Christophe Leroy, Andrew Morton, Christoph Hellwig, Lorenzo Stoakes,
    Matthew Wilcox, Rik van Riel, linux-arm-kernel@lists.infradead.org,
    Andrea Arcangeli, David Hildenbrand, "Aneesh Kumar K. V",
    Vlastimil Babka, James Houghton, Jason Gunthorpe, Mike Rapoport,
    Axel Rasmussen
Subject: [PATCH v4 08/13] mm/gup: Handle hugetlb for no_page_table()
Date: Wed, 27 Mar 2024 11:23:27 -0400
Message-ID: <20240327152332.950956-9-peterx@redhat.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu <peterx@redhat.com>

no_page_table() is not yet used for hugetlb code paths; prepare it for
that.

The major difference here is that hugetlb will return -EFAULT as long as
the page is not present in the page cache, even if the mapping is
VM_SHARED.  See hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index c2881772216b..ef46a7053e16 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-		unsigned int flags)
+		unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
 	 */
-	if ((flags & FOLL_DUMP) &&
-	    (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -777,10 +785,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 	p4dp = p4d_offset(pgdp, address);
 	p4d = READ_ONCE(*p4dp);
 	if (!p4d_present(p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_leaf(p4d));
 	if (unlikely(p4d_bad(p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4dp, flags, ctx);
 }
@@ -830,7 +838,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	pgd = pgd_offset(mm, address);
 
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
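
For readers new to this corner of GUP: the reason no_page_table() can
signal three distinct outcomes through a single struct page * return is
the kernel's ERR_PTR() convention.  The sketch below is a minimal,
self-contained userspace re-creation of that convention (the helper
definitions are simplified stand-ins for <linux/err.h>, not the kernel's
own), showing how a FOLL_DUMP caller would tell "leave a zero-filled
hole in the dump" (ERR_PTR(-EFAULT)) apart from "fall back to
handle_mm_fault()" (NULL):

#include <stdio.h>
#include <errno.h>

/* The kernel reserves the top 4095 pointer values for errno codes. */
#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	/* Two of the outcomes no_page_table() can now produce: */
	void *hole  = ERR_PTR(-EFAULT); /* hugetlb page-cache miss under FOLL_DUMP */
	void *retry = NULL;             /* no FOLL_DUMP: caller faults the page in */

	if (IS_ERR(hole))
		printf("ERR_PTR(%ld): leave a zero-filled hole in the dump\n",
		       PTR_ERR(hole));
	if (!retry)
		printf("NULL: fall back to handle_mm_fault()\n");
	return 0;
}

Note how IS_ERR(NULL) is false, which is what lets NULL keep its
separate "nothing mapped, go fault" meaning alongside real error codes.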