From patchwork Thu Mar 21 22:07:56 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13599427
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org, Michael Ellerman, Christophe Leroy,
 Matthew Wilcox, Rik van Riel, Lorenzo Stoakes, Axel Rasmussen,
 peterx@redhat.com, Yang Shi, John Hubbard,
 linux-arm-kernel@lists.infradead.org, "Kirill A. Shutemov", Andrew Jones,
 Vlastimil Babka, Mike Rapoport, Andrew Morton, Muchun Song,
 Christoph Hellwig, linux-riscv@lists.infradead.org, James Houghton,
 David Hildenbrand, Jason Gunthorpe, Andrea Arcangeli,
 "Aneesh Kumar K. V", Mike Kravetz
Subject: [PATCH v3 06/12] mm/gup: Refactor record_subpages() to find 1st small page
Date: Thu, 21 Mar 2024 18:07:56 -0400
Message-ID: <20240321220802.679544-7-peterx@redhat.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240321220802.679544-1-peterx@redhat.com>
References: <20240321220802.679544-1-peterx@redhat.com>

From: Peter Xu

All the fast-gup functions take a tail page to operate on, and each of
them needs to do the same page-mask calculation before feeding that tail
page into record_subpages().  Merge that logic into record_subpages()
itself, so the nth_page() calculation is done in one place.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 mm/gup.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 9127ec5515ac..f3ae8f6ce8a4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2778,13 +2778,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long addr,
-			   unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
 {
+	struct page *start_page;
 	int nr;
 
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
 	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(page, nr);
+		pages[nr] = nth_page(start_page, nr);
 
 	return nr;
 }
@@ -2819,8 +2822,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2893,8 +2896,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pmd_page(orig);
+	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2937,8 +2940,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					  pages, nr);
 	}
 
-	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pud_page(orig);
+	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2977,8 +2980,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
 	BUILD_BUG_ON(pgd_devmap(orig));
 
-	page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pgd_page(orig);
+	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
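As an illustration of the arithmetic being consolidated here, the small
userspace sketch below mimics what the reworked record_subpages() does:
derive the first small page from the offset of addr within the huge
mapping, then record one entry per base page.  It is illustrative only,
not kernel code; the PAGE_SHIFT/PMD_SHIFT values and the example address
are assumed (typical x86-64 numbers).

/*
 * Userspace sketch of the record_subpages() index arithmetic.
 * Assumptions: 4 KiB base pages, 2 MiB PMD-level huge pages.
 */
#include <stdio.h>

#define PAGE_SHIFT	12UL			/* assumed: 4 KiB base page */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PMD_SHIFT	21UL			/* assumed: 2 MiB huge page */
#define PMD_SIZE	(1UL << PMD_SHIFT)

int main(void)
{
	unsigned long sz = PMD_SIZE;			/* per-level size, as now passed in */
	unsigned long addr = 0x7f0000025000UL;		/* hypothetical GUP start address */
	unsigned long end = addr + 3 * PAGE_SIZE;	/* record 3 base pages */
	unsigned long nr;

	/* Index of the first small page inside the huge page. */
	unsigned long first = (addr & (sz - 1)) >> PAGE_SHIFT;

	/* Same loop shape as record_subpages(): one entry per base page. */
	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
		printf("pages[%lu] = huge page + subpage %lu\n", nr, first + nr);

	printf("recorded %lu pages\n", nr);
	return 0;
}

With the refactor, each fast-gup caller only passes its level's size
(sz, PMD_SIZE, PUD_SIZE or PGDIR_SIZE), and the nth_page() offset math
lives in exactly one place instead of being repeated at every call site.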