
[RFC,20/23] mm: round down folio split requirements

Message ID: 20230915183848.1018717-21-kernel@pankajraghav.com
State: New
Series: Enable block size > page size in XFS

Commit Message

Pankaj Raghav (Samsung) Sept. 15, 2023, 6:38 p.m. UTC
From: Luis Chamberlain <mcgrof@kernel.org>

When we truncate, we always check whether a large folio can be split.
We do this by checking that the folio's map count matches its reference
count minus the extra pins and one page. But if the folio belongs to a
filesystem or a block device with a minimum folio order, that order
must be respected: the folio can only be split down to the minimum
order, so account for 1 << min_order pages instead of a single page.
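
As a worked illustration of the new accounting (a standalone user-space
sketch, not kernel code; the helper name and the order-4 / min-order-2
values are assumptions chosen for this example), the split requirement
moves from a slack of one page to a slack of 1 << min_order pages:

#include <stdbool.h>
#include <stdio.h>

/*
 * Mirror of the rounded check: a folio is considered splittable only
 * when its reference count exceeds its map count by the extra pins
 * plus 1 << min_order pages, since the split cannot go below the
 * mapping's minimum folio order.
 */
static bool can_split(unsigned int mapcount, unsigned int refcount,
		      unsigned int extra_pins, unsigned int min_order)
{
	unsigned int nrpages = 1U << min_order;

	return mapcount == refcount - extra_pins - nrpages;
}

int main(void)
{
	/*
	 * Hypothetical order-4 page-cache folio on a filesystem with a
	 * minimum folio order of 2: 16 page-cache pins plus a slack of
	 * 1 << 2 = 4 pages are expected before the split is allowed.
	 */
	printf("splittable: %d\n", can_split(0, 16 + 4, 16, 2));
	return 0;
}

Note that anonymous folios keep min_order at 0, so nrpages stays 1 and
their behaviour is unchanged; only page-cache folios of a mapping with
a non-zero minimum order see the stricter requirement.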

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 mm/huge_memory.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)
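
For context, this does not change how the result is consumed: the split
path keeps treating a false return as "too many extra pins". The
existing caller in split_huge_page_to_list() looks roughly like the
fragment below (abbreviated for illustration, not part of this patch):

	/* Racy check whether anyone else still holds extra references. */
	if (!can_split_folio(folio, &extra_pins)) {
		ret = -EAGAIN;
		goto out_unlock;
	}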

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f899b3500419..e608a805c79f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2617,16 +2617,24 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 bool can_split_folio(struct folio *folio, int *pextra_pins)
 {
 	int extra_pins;
+	unsigned int min_order = 0;
+	unsigned int nrpages;
 
 	/* Additional pins from page cache */
-	if (folio_test_anon(folio))
+	if (folio_test_anon(folio)) {
 		extra_pins = folio_test_swapcache(folio) ?
 				folio_nr_pages(folio) : 0;
-	else
+	} else {
 		extra_pins = folio_nr_pages(folio);
+		if (folio->mapping)
+			min_order = mapping_min_folio_order(folio->mapping);
+	}
+
+	nrpages = 1UL << min_order;
+
 	if (pextra_pins)
 		*pextra_pins = extra_pins;
-	return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins - 1;
+	return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins - nrpages;
 }
 
 /*