From patchwork Fri Apr 14 13:02:59 2023
X-Patchwork-Submitter: Ryan Roberts <ryan.roberts@arm.com>
X-Patchwork-Id: 13211479
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, "Matthew Wilcox (Oracle)", Yu Zhao, "Yin, Fengwei"
Cc: Ryan Roberts, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC v2 PATCH 13/17] mm: Implement folio_remove_rmap_range()
Date: Fri, 14 Apr 2023 14:02:59 +0100
Message-Id: <20230414130303.2345383-14-ryan.roberts@arm.com>
In-Reply-To: <20230414130303.2345383-1-ryan.roberts@arm.com>
References: <20230414130303.2345383-1-ryan.roberts@arm.com>
Like page_remove_rmap() but batch-removes the rmap for a range of pages
belonging to a folio, for efficiency savings. All pages are accounted as
small pages.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 62 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

--
2.25.1

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 8cb0ba48d58f..7daf25887049 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -204,6 +204,8 @@ void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+		int nr, struct vm_area_struct *vma);
 
 void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
diff --git a/mm/rmap.c b/mm/rmap.c
index 1cd8fb0b929f..954e44054d5c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1419,6 +1419,68 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	mlock_vma_folio(folio, vma, compound);
 }
 
+/**
+ * folio_remove_rmap_range - take down pte mappings from a range of pages
+ * belonging to a folio. All pages are accounted as small pages.
+ * @folio: folio that all pages belong to
+ * @page: first page in range to remove mapping from
+ * @nr: number of pages in range to remove mapping from
+ * @vma: the vm area from which the mapping is removed
+ *
+ * The caller needs to hold the pte lock.
+ */
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+					int nr, struct vm_area_struct *vma)
+{
+	atomic_t *mapped = &folio->_nr_pages_mapped;
+	int nr_unmapped = 0;
+	int nr_mapped;
+	bool last;
+	enum node_stat_item idx;
+
+	VM_BUG_ON_FOLIO(folio_test_hugetlb(folio), folio);
+
+	if (!folio_test_large(folio)) {
+		/* Is this the page's last map to be removed? */
+		last = atomic_add_negative(-1, &page->_mapcount);
+		nr_unmapped = last;
+	} else {
+		for (; nr != 0; nr--, page++) {
+			/* Is this the page's last map to be removed? */
+			last = atomic_add_negative(-1, &page->_mapcount);
+			if (last) {
+				/* Page still mapped if folio mapped entirely */
+				nr_mapped = atomic_dec_return_relaxed(mapped);
+				if (nr_mapped < COMPOUND_MAPPED)
+					nr_unmapped++;
+			}
+		}
+	}
+
+	if (nr_unmapped) {
+		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
+		__lruvec_stat_mod_folio(folio, idx, -nr_unmapped);
+
+		/*
+		 * Queue anon THP for deferred split if we have just unmapped at
+		 * least 1 page, while at least 1 page remains mapped.
+		 */
+		if (folio_test_large(folio) && folio_test_anon(folio))
+			if (nr_mapped)
+				deferred_split_folio(folio);
+	}
+
+	/*
+	 * It would be tidy to reset folio_test_anon mapping when fully
+	 * unmapped, but that might overwrite a racing page_add_anon_rmap
+	 * which increments mapcount after us but sets mapping before us:
+	 * so leave the reset to free_pages_prepare, and remember that
+	 * it's only reliable while mapped.
+	 */
+
+	munlock_vma_folio(folio, vma, false);
+}
+
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page: page to remove mapping from
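
For reference, here is a minimal sketch of how a caller might use the new
interface, assuming it has already cleared a contiguous run of ptes mapping
consecutive pages of a single folio and still holds the pte lock. The wrapper
name example_zap_folio_range() is hypothetical and is not part of this series:

#include <linux/mm.h>
#include <linux/rmap.h>

/*
 * Hypothetical caller: drop the rmap for 'nr' consecutive pages of 'folio',
 * starting at 'page', after the corresponding ptes have been cleared.
 * The pte lock must be held, as required by folio_remove_rmap_range().
 */
static void example_zap_folio_range(struct folio *folio, struct page *page,
				    int nr, struct vm_area_struct *vma)
{
	/*
	 * The per-page equivalent would be:
	 *
	 *	for (i = 0; i < nr; i++)
	 *		page_remove_rmap(page + i, vma, false);
	 *
	 * The batched call below updates the NR_ANON_MAPPED/NR_FILE_MAPPED
	 * counter and performs the deferred-split check once for the whole
	 * range instead of once per unmapped page.
	 */
	folio_remove_rmap_range(folio, page, nr, vma);
}

That batching is where the efficiency saving described above comes from: the
per-page work is limited to the _mapcount and _nr_pages_mapped updates, while
the lruvec stat update and the deferred-split decision are amortised over the
whole range.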