
[1/3] mm: Opencode split_page_memcg() in __split_huge_page()

Message ID 20241104210602.374975-2-willy@infradead.org (mailing list archive)
State New
Series Introduce acctmem

Commit Message

Matthew Wilcox Nov. 4, 2024, 9:05 p.m. UTC
This is in preparation for only handling kmem pages in
__split_huge_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/huge_memory.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

Comments

David Hildenbrand Nov. 5, 2024, 5:30 p.m. UTC | #1
On 04.11.24 22:05, Matthew Wilcox (Oracle) wrote:
> This is in preparation for only handling kmem pages in
> __split_huge_page().

Did you mean "in split_page_memcg()"?

> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   mm/huge_memory.c | 11 +++++++++--
>   1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f92068864469..44d25a74b611 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3234,6 +3234,10 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
>   		folio_set_large_rmappable(new_folio);
>   	}
>   
> +#ifdef CONFIG_MEMCG
> +	new_folio->memcg_data = folio->memcg_data;
> +#endif
> +
>   	/* Finally unfreeze refcount. Additional reference from page cache. */
>   	page_ref_unfreeze(page_tail,
>   		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
> @@ -3267,8 +3271,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>   	int order = folio_order(folio);
>   	unsigned int nr = 1 << order;
>   
> -	/* complete memcg works before add pages to LRU */
> -	split_page_memcg(head, order, new_order);
> +#ifdef CONFIG_MEMCG
> +	if (folio_memcg_charged(folio))

Do we have the mem_cgroup_disabled() call here?

Apart from that LGTM.

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f92068864469..44d25a74b611 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3234,6 +3234,10 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 		folio_set_large_rmappable(new_folio);
 	}
 
+#ifdef CONFIG_MEMCG
+	new_folio->memcg_data = folio->memcg_data;
+#endif
+
 	/* Finally unfreeze refcount. Additional reference from page cache. */
 	page_ref_unfreeze(page_tail,
 		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
@@ -3267,8 +3271,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	int order = folio_order(folio);
 	unsigned int nr = 1 << order;
 
-	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order, new_order);
+#ifdef CONFIG_MEMCG
+	if (folio_memcg_charged(folio))
+		css_get_many(&folio_memcg(folio)->css,
+				(1 << (order - new_order)) - 1);
+#endif
 
 	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
 		offset = swap_cache_index(folio->swap);