From patchwork Thu Mar 21 22:08:01 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13599414
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org, Michael Ellerman, Christophe Leroy,
    Matthew Wilcox, Rik van Riel, Lorenzo Stoakes, Axel Rasmussen,
    peterx@redhat.com, Yang Shi, John Hubbard,
    linux-arm-kernel@lists.infradead.org, "Kirill A. Shutemov",
    Andrew Jones, Vlastimil Babka, Mike Rapoport, Andrew Morton,
    Muchun Song, Christoph Hellwig, linux-riscv@lists.infradead.org,
    James Houghton, David Hildenbrand, Jason Gunthorpe, Andrea Arcangeli,
V" , Mike Kravetz Subject: [PATCH v3 11/12] mm/gup: Handle hugepd for follow_page() Date: Thu, 21 Mar 2024 18:08:01 -0400 Message-ID: <20240321220802.679544-12-peterx@redhat.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240321220802.679544-1-peterx@redhat.com> References: <20240321220802.679544-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: E6EE6160019 X-Stat-Signature: tukit41ucw65ygi9qytzazu6s15pq7zg X-Rspam-User: X-HE-Tag: 1711058909-486808 X-HE-Meta: U2FsdGVkX192hA1keiEWktiWdcJegfKWEMSsNgfgXZldmHtZuaYk+/218FHqTWfBymjtR80NLrCTxZHP2oSscqaTno9d0rfx9hgZi6Z/xQxyvGHAAkgc1kO/pFQA1TD9O+YTmZFGF9KPbqnzFe2Qs0MhHoHC9IMHFvlQqq7kfyCV+j+HbllMHBZboFROKSZeHMvBlLZVmbTf0ZKwt0kWNyRvUezsM6lL780x9c25KyvfLIL1YuZWSoVjItmP74JR2li5zhW8ElbcXDNSDZoSZwB0JzNFoarE0JXpkHG7OxTe9Ybng+C/H/Vgc5M4Vql8A3khbYCPuB2aAtupIw07OFcSVt+ExNbppatCMili4NqKClOg7FBzXP4YC8spRjmPZKvwXVubZ4cAHagOeAi8XlmdoYB9BZ3+dr7rr9XepngbPkP5XbkhDpGr9oxX8KP0dJDXWY9toHTuiAsvIzTW8zyAfPBzC6EL0TZdskB8MSgt4+/IcMNrMDpzCnR4CnMOkezFyDmS0HVOkCIN6VCQBsB0dDoRHDbweJRH/3TZ5yi+4sUncVzZNlDOTKrLcATrMD2+XwwQMI9ZIdFARbVaxNAUgk/yHGDv/MeAaKjvVfA6g3Bm3w7noxDEywM2fDmQ73idrrVm52oDF9GHjpZa56qOxlY7axbfWxuXG/pqleHdvbX5Z1Jm9KlrqNKEBXS4MmIANg8wx2cuyClnOfHceLRYuXms+9HyiTOgGACjdwF0ua/4UQP5Nvyu2tM/FmkS8XQxKqzZncCdvfns8PLNREB8b/bqWJqfGO+XPr6yB5xPu1VTOYqOA23F6FsMsRSsLtBRuF4NWULPo6tbUZOQHtaddYAOcALJdeYHk8LLVFjY0P5d5ansRSHclEAcajf/dU3wl8CdeRqBe8Vp3P186DbplHulU1/ABR60he/WzSjsnDmWldSKbxfxPoHl23fF0dN5T1qwqGwRWXEjyMM aUVvd3Nn ga40uy+yMVH7UqKcAbvEPOBIybfv0hYLqM5pYsnZEhoZRGrvqx9j5RxBLe/T4c63L8yeoZ7L++UXnODevMT1AkBfHabrnFXk53KLfSaOrS/SB0T71YxDMnMyj7owmSBeBlg5ypanjOBptP9sutfW6N42/O+8ggw+nhBgk/W3p/0ElhBqfLQlvxdz/EqLAx5Yg2C9E5YDzOVpNKYYn8Ud/9qdz0t5NgGiYEk0u2Sp4jHY04+N7TtlL9qYuYIx2TZySqh5xi21kdubs0+Fx6naUc3QfLpzjyw1wAaJZqW4Iqx/ZVMb5xN0SgPuXSGOfipvRprbFAE1rde/o9g19vUjXwIN0LgCYKhtz9ALcQ+q1A9IeqUhAvZH0X9/ot3Xh4LJcpcgrtES2TUSOETjC77PAGg5eyQtJueq3190vY2JE2xvWNPcMtrXYZNP6MQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Peter Xu Hugepd is only used in PowerPC so far on 4K page size kernels where hash mmu is used. follow_page_mask() used to leverage hugetlb APIs to access hugepd entries. Teach follow_page_mask() itself on hugepd. With previous refactors on fast-gup gup_huge_pd(), most of the code can be easily leveraged. There's something not needed for follow page, for example, gup_hugepte() tries to detect pgtable entry change which will never happen with slow gup (which has the pgtable lock held), but that's not a problem to check. Since follow_page() always only fetch one page, set the end to "address + PAGE_SIZE" should suffice. We will still do the pgtable walk once for each hugetlb page by setting ctx->page_mask properly. One thing worth mentioning is that some level of pgtable's _bad() helper will report is_hugepd() entries as TRUE on Power8 hash MMUs. I think it at least applies to PUD on Power8 with 4K pgsize. It means feeding a hugepd entry to pud_bad() will report a false positive. Let's leave that for now because it can be arch-specific where I am a bit declined to touch. In this patch it's not a problem as long as hugepd is detected before any bad pgtable entries. 
Signed-off-by: Peter Xu
---
 mm/gup.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 66 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 00cdf4cb0cd4..43a2e0a203cd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,6 +30,11 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx);
+
 static inline void sanity_check_pinned_pages(struct page **pages,
 					     unsigned long npages)
 {
@@ -871,6 +876,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
+		return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
+				     address, PMD_SHIFT, flags, ctx);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -921,6 +929,9 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	pud = READ_ONCE(*pudp);
 	if (!pud_present(pud))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
+		return follow_hugepd(vma, __hugepd(pud_val(pud)),
+				     address, PUD_SHIFT, flags, ctx);
 	if (pud_leaf(pud)) {
 		ptl = pud_lock(mm, pudp);
 		page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -944,10 +955,13 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 	p4dp = p4d_offset(pgdp, address);
 	p4d = READ_ONCE(*p4dp);
-	if (!p4d_present(p4d))
-		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_leaf(p4d));
-	if (unlikely(p4d_bad(p4d)))
+
+	if (unlikely(is_hugepd(__hugepd(p4d_val(p4d)))))
+		return follow_hugepd(vma, __hugepd(p4d_val(p4d)),
+				     address, P4D_SHIFT, flags, ctx);
+
+	if (!p4d_present(p4d) || p4d_bad(p4d))
 		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4dp, flags, ctx);
@@ -981,7 +995,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 			      unsigned long address, unsigned int flags,
 			      struct follow_page_context *ctx)
 {
-	pgd_t *pgd;
+	pgd_t *pgd, pgdval;
 	struct mm_struct *mm = vma->vm_mm;
 
 	ctx->page_mask = 0;
@@ -996,11 +1010,17 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 					    &ctx->page_mask);
 
 	pgd = pgd_offset(mm, address);
+	pgdval = *pgd;
 
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pgd_val(pgdval)))))
+		page = follow_hugepd(vma, __hugepd(pgd_val(pgdval)),
+				     address, PGDIR_SHIFT, flags, ctx);
+	else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+		page = no_page_table(vma, flags, address);
+	else
+		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
-	return follow_p4d_mask(vma, address, pgd, flags, ctx);
+	return page;
 }
 
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
@@ -3037,6 +3057,37 @@ static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 
 	return 1;
 }
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	struct page *page;
+	struct hstate *h;
+	spinlock_t *ptl;
+	int nr = 0, ret;
+	pte_t *ptep;
+
+	/* Only hugetlb supports hugepd */
+	if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
+		return ERR_PTR(-EFAULT);
+
+	h = hstate_vma(vma);
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+			  flags, &page, &nr);
+	spin_unlock(ptl);
+
+	if (ret) {
+		WARN_ON_ONCE(nr != 1);
+		ctx->page_mask = (1U << huge_page_order(h)) - 1;
+		return page;
+	}
+
+	return NULL;
+}
 #else
 static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		unsigned int pdshift, unsigned long end, unsigned int flags,
@@ -3044,6 +3095,14 @@ static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 {
 	return 0;
 }
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	return NULL;
+}
 #endif	/* CONFIG_ARCH_HAS_HUGEPD */
 
 static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
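
As a closing illustration of the ordering caveat from the changelog (again
not part of the patch; the struct and the two predicates below are
hypothetical stand-ins for the real pgtable helpers): on a config where a
_bad() helper reports a hugepd entry as bad, the is_hugepd() test has to
come first, otherwise the walk would return no_page_table() for a
perfectly valid hugetlb mapping.

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical stand-ins: on a Power8 hash MMU with 4K pages, feeding a
 * hugepd entry to a _bad() helper such as pud_bad() can return true.
 */
struct entry { bool is_hugepd; };

static bool entry_is_hugepd(struct entry e) { return e.is_hugepd; }
static bool entry_bad(struct entry e)       { return e.is_hugepd; /* false positive */ }

static const char *walk(struct entry e)
{
	/* Same ordering as the patch: detect hugepd before the _bad() check. */
	if (entry_is_hugepd(e))
		return "follow_hugepd()";
	if (entry_bad(e))
		return "no_page_table()";
	return "descend to the next level";
}

int main(void)
{
	struct entry hugepd_entry = { .is_hugepd = true };

	/*
	 * With this ordering the hugepd entry is handled instead of being
	 * rejected by the false-positive _bad() helper.
	 */
	printf("hugepd entry -> %s\n", walk(hugepd_entry));
	return 0;
}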