From patchwork Fri Apr 14 13:02:54 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13211476
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, "Matthew Wilcox (Oracle)", Yu Zhao, "Yin, Fengwei"
Cc: Ryan Roberts, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC v2 PATCH 08/17] mm: Implement folio_move_anon_rmap_range()
Date: Fri, 14 Apr 2023 14:02:54 +0100
Message-Id: <20230414130303.2345383-9-ryan.roberts@arm.com>
In-Reply-To: <20230414130303.2345383-1-ryan.roberts@arm.com>
References: <20230414130303.2345383-1-ryan.roberts@arm.com>

Similar to page_move_anon_rmap(), but batch-moves a range of pages
within a folio for increased efficiency. This will be used to enable
reusing multiple pages from a large anonymous folio in one go.
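(Illustrative note, not part of the patch: the snippet below is a
hypothetical caller sketch showing the intended calling convention of
the new helper; example_reuse_range() does not exist in the kernel. It
assumes [page, page + nr) lies within the locked folio, as required by
the VM_BUG_ON_FOLIO() check in folio_move_anon_rmap_range().)

/*
 * Hypothetical caller, for illustration only. After a COW event has
 * established that 'nr' contiguous pages starting at 'page' within the
 * locked 'folio' belong exclusively to this process, the whole range
 * can be moved to vma->anon_vma with one call.
 */
static void example_reuse_range(struct folio *folio, struct page *page,
				int nr, struct vm_area_struct *vma)
{
	/*
	 * Equivalent to calling page_move_anon_rmap(page + i, vma) for
	 * each i in [0, nr), but folio->mapping is written only once
	 * and the folio is not re-derived for every page; only the
	 * per-page PageAnonExclusive bits are set in a loop.
	 */
	folio_move_anon_rmap_range(folio, page, nr, vma);
}

As the diff shows, page_move_anon_rmap() itself becomes the nr == 1
case of the new helper, so existing callers are unchanged.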
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 40 ++++++++++++++++++++++++++++++----------
 2 files changed, 32 insertions(+), 10 deletions(-)

-- 
2.25.1

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 5c707f53d7b5..8cb0ba48d58f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -190,6 +190,8 @@ typedef int __bitwise rmap_t;
  * rmap interfaces called when adding or removing pte of page
  */
 void page_move_anon_rmap(struct page *, struct vm_area_struct *);
+void folio_move_anon_rmap_range(struct folio *folio, struct page *page,
+			int nr, struct vm_area_struct *vma);
 void page_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
 void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
diff --git a/mm/rmap.c b/mm/rmap.c
index 5148a484f915..1cd8fb0b929f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1103,19 +1103,22 @@ int folio_total_mapcount(struct folio *folio)
 }
 
 /**
- * page_move_anon_rmap - move a page to our anon_vma
- * @page:	the page to move to our anon_vma
- * @vma:	the vma the page belongs to
+ * folio_move_anon_rmap_range - batch-move a range of pages within a folio to
+ * our anon_vma; a more efficient version of page_move_anon_rmap().
+ * @folio:	folio that owns the range of pages
+ * @page:	the first page to move to our anon_vma
+ * @nr:		number of pages to move to our anon_vma
+ * @vma:	the vma the page belongs to
  *
- * When a page belongs exclusively to one process after a COW event,
- * that page can be moved into the anon_vma that belongs to just that
- * process, so the rmap code will not search the parent or sibling
- * processes.
+ * When a range of pages belongs exclusively to one process after a COW event,
+ * those pages can be moved into the anon_vma that belongs to just that process,
+ * so the rmap code will not search the parent or sibling processes.
  */
-void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
+void folio_move_anon_rmap_range(struct folio *folio, struct page *page,
+			int nr, struct vm_area_struct *vma)
 {
 	void *anon_vma = vma->anon_vma;
-	struct folio *folio = page_folio(page);
+	int i;
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_VMA(!anon_vma, vma);
@@ -1127,7 +1130,24 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
 	 * folio_test_anon()) will not see one without the other.
 	 */
 	WRITE_ONCE(folio->mapping, anon_vma);
-	SetPageAnonExclusive(page);
+
+	for (i = 0; i < nr; i++)
+		SetPageAnonExclusive(page++);
+}
+
+/**
+ * page_move_anon_rmap - move a page to our anon_vma
+ * @page:	the page to move to our anon_vma
+ * @vma:	the vma the page belongs to
+ *
+ * When a page belongs exclusively to one process after a COW event,
+ * that page can be moved into the anon_vma that belongs to just that
+ * process, so the rmap code will not search the parent or sibling
+ * processes.
+ */
+void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
+{
+	folio_move_anon_rmap_range(page_folio(page), page, 1, vma);
 }
 
 /**