
[v1,6/6] mm/hugetlb: use folio->lru in demote_free_hugetlb_folios()

Message ID 20250110182149.746551-7-david@redhat.com (mailing list archive)
State New
Series mm/hugetlb: folio and migration cleanups

Commit Message

David Hildenbrand Jan. 10, 2025, 6:21 p.m. UTC
We are demoting hugetlb folios to smaller hugetlb folios; let's avoid
messing with pages where avoidable.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/hugetlb.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Matthew Wilcox Jan. 10, 2025, 6:59 p.m. UTC | #1
On Fri, Jan 10, 2025 at 07:21:49PM +0100, David Hildenbrand wrote:
> We are demoting hugetlb folios to smaller hugetlb folios; let's avoid
> messing with pages where avoidable.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Good stuff.  I have questions.

> +++ b/mm/hugetlb.c
> @@ -3822,13 +3822,15 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
>  
>  		for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
>  			struct page *page = folio_page(folio, i);
> +			struct folio *new_folio;

I'm usually very against casting from page to folio, but I think it
might be the better option in this case ...

>  			page->mapping = NULL;

because then we could do new_folio->mapping = NULL.

We're going to have to make serious changes to this function anyway
for the memdesc conversion, so the cast doesn't give me the feeling
of danger that it would elsewhere.

>  			clear_compound_head(page);
>  			prep_compound_page(page, dst->order);
> +			new_folio = page_folio(page);
>  
> -			init_new_hugetlb_folio(dst, page_folio(page));
> -			list_add(&page->lru, &dst_list);
> +			init_new_hugetlb_folio(dst, new_folio);
> +			list_add(&new_folio->lru, &dst_list);
>  		}
>  	}
>  
> -- 
> 2.47.1
>
David Hildenbrand Jan. 10, 2025, 7:32 p.m. UTC | #2
On 10.01.25 19:59, Matthew Wilcox wrote:
> On Fri, Jan 10, 2025 at 07:21:49PM +0100, David Hildenbrand wrote:
>> We are demoting hugetlb folios to smaller hugetlb folios; let's avoid
>> messing with pages where avoidable.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
> Good stuff.  I have questions.
> 
>> +++ b/mm/hugetlb.c
>> @@ -3822,13 +3822,15 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
>>   
>>   		for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
>>   			struct page *page = folio_page(folio, i);
>> +			struct folio *new_folio;
> 
> I'm usually very against casting from page to folio, but I think it
> might be the better option in this case ...
> 
>>   			page->mapping = NULL;
> 
> because then we could do new_folio->mapping = NULL.
> 
> We're going to have to make serious changes to this function anyway
> for the memdesc conversion, so the cast doesn't give me the feeling
> of danger that it would elsewhere.

Hm, that makes me wonder if we should do it even more similarly to our
other split function (__split_huge_page_tail)?


diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 60617eecb99dd..23fe5654f632c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3821,14 +3821,18 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
                 pgalloc_tag_split(folio, huge_page_order(src), huge_page_order(dst));
  
                 for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
                         struct page *page = folio_page(folio, i);
+                       /* Careful: see __split_huge_page_tail() */
+                       struct folio *new_folio = (struct folio *)page;
  
-                       page->mapping = NULL;
                         clear_compound_head(page);
                         prep_compound_page(page, dst->order);
  
-                       init_new_hugetlb_folio(dst, page_folio(page));
-                       list_add(&page->lru, &dst_list);
+                       new_folio->mapping = NULL;
+                       init_new_hugetlb_folio(dst, new_folio);
+                       list_add(&new_folio->lru, &dst_list);
                 }
         }
  

I was even wondering if we should be using nth_page() instead of folio_page() --
similar to what __split_huge_page_tail() does.
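
Something like the following (untested sketch; IIRC folio_page() is
implemented via nth_page() today, so this would be about making it
explicit that we no longer index into an intact folio, not about
changing behavior):

	for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
		/*
		 * Index relative to the old head page: after the first
		 * iteration, prep_compound_page() already re-initialized
		 * parts of the src folio.
		 */
		struct page *page = nth_page(&folio->page, i);
		struct folio *new_folio = (struct folio *)page;
		/* ... rest of the loop as in the diff above ... */
	}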

If we were to add sanity checking to folio_page(), verifying that i
falls inside the folio, the current code would blow up, because we
modify the folio using prep_compound_page().
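
IOW, a hypothetical debug variant (not in mainline, untested) like

	static inline struct page *checked_folio_page(struct folio *folio, long n)
	{
		/* Hypothetical: assert that n still falls inside the folio. */
		VM_BUG_ON_FOLIO(n >= folio_nr_pages(folio), folio);
		return nth_page(&folio->page, n);
	}

would fire on the second chunk already: the i == 0 iteration runs
prep_compound_page() on the old head at dst->order, so
folio_nr_pages(folio) then returns pages_per_huge_page(dst) instead
of pages_per_huge_page(src).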

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 60617eecb99dd..e872eff124abb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3822,13 +3822,15 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
 
 		for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
 			struct page *page = folio_page(folio, i);
+			struct folio *new_folio;
 
 			page->mapping = NULL;
 			clear_compound_head(page);
 			prep_compound_page(page, dst->order);
+			new_folio = page_folio(page);
 
-			init_new_hugetlb_folio(dst, page_folio(page));
-			list_add(&page->lru, &dst_list);
+			init_new_hugetlb_folio(dst, new_folio);
+			list_add(&new_folio->lru, &dst_list);
 		}
 	}