
[RFC,v1,0/2] How HugeTLB handle HWPoison page at truncation

Message ID 20250119180608.2132296-1-jiaqiyan@google.com (mailing list archive)

Message

Jiaqi Yan Jan. 19, 2025, 6:06 p.m. UTC
While I was working on userspace MFR via memfd [1], I spent some time to
understand what the current kernel does when a HugeTLB-backed memfd is
truncated. My expectation is that if there is a HWPoison HugeTLB folio
mapped via the memfd to userspace, it will be unmapped right away but
still be kept in the page cache [2]; however, when the memfd is truncated
to zero or after the memfd is closed, the kernel should dissolve the
HWPoison folio in the page cache, and free only the clean raw pages to
the buddy allocator, excluding the poisoned raw page.
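
To be concrete about the "unmapped right away" part, the sketch below is
the kind of check I have in mind (illustrative only, not the selftest in
this series; it assumes the default 2MB hugepage size, needs root for
MADV_HWPOISON, and omits error handling):

#define _GNU_SOURCE
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL << 20)  /* assumption: 2MB default hugepage size */

static sigjmp_buf env;

static void sigbus_handler(int sig, siginfo_t *info, void *ctx)
{
	siglongjmp(env, 1);
}

int main(void)
{
	struct sigaction sa = { .sa_sigaction = sigbus_handler,
				.sa_flags = SA_SIGINFO };
	int fd = memfd_create("hwpoison-unmap", MFD_HUGETLB);
	char *p;

	sigaction(SIGBUS, &sa, NULL);
	ftruncate(fd, HPAGE_SIZE);
	p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	memset(p, 0xab, HPAGE_SIZE);             /* fault the huge folio in */

	if (sigsetjmp(env, 1) == 0) {
		madvise(p, 4096, MADV_HWPOISON); /* poison one raw page */
		(void)*(volatile char *)p;       /* mapping is gone: SIGBUS */
		printf("unexpected: still mapped\n");
	} else {
		printf("got SIGBUS after MADV_HWPOISON, as expected\n");
	}
	return 0;
}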

So I wrote a hugetlb-mfr-base.c selftest with these expectations (a
minimal sketch of the test flow follows the list):
0. say nr_hugepages is initially 64 per the system configuration.
1. after MADV_HWPOISON, nr_hugepages should still be 64, as we keep even
   the HWPoison huge folio in the page cache. free_hugepages should be
   nr_hugepages minus whatever amount is in use.
2. after truncating the memfd to zero, nr_hugepages should be reduced to 63,
   as the kernel dissolved and freed the HWPoison huge folio. free_hugepages
   should also be 63.
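
For reference, a minimal sketch of that flow (not the actual
hugetlb-mfr-base.c; it assumes the default 2MB hugepage size, so the
counters live under /sys/kernel/mm/hugepages/hugepages-2048kB/, needs
root, and omits error handling):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL << 20)
#define SYSFS_FMT  "/sys/kernel/mm/hugepages/hugepages-2048kB/%s"

static long read_counter(const char *name)
{
	char path[128], buf[32];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), SYSFS_FMT, name);
	f = fopen(path, "r");
	if (f && fgets(buf, sizeof(buf), f))
		val = strtol(buf, NULL, 10);
	if (f)
		fclose(f);
	return val;
}

int main(void)
{
	int fd = memfd_create("hugetlb-mfr", MFD_HUGETLB);
	char *p;

	printf("baseline:       nr=%ld free=%ld\n",
	       read_counter("nr_hugepages"), read_counter("free_hugepages"));

	ftruncate(fd, HPAGE_SIZE);
	p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	memset(p, 0xab, HPAGE_SIZE);      /* allocate the huge folio */

	madvise(p, 4096, MADV_HWPOISON);  /* step 1: poison one raw page */
	printf("after poison:   nr=%ld free=%ld\n",
	       read_counter("nr_hugepages"), read_counter("free_hugepages"));

	munmap(p, HPAGE_SIZE);
	ftruncate(fd, 0);                 /* step 2: truncate the memfd to zero */
	printf("after truncate: nr=%ld free=%ld\n",
	       read_counter("nr_hugepages"), read_counter("free_hugepages"));

	close(fd);
	return 0;
}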

However, when testing at the head of mm-stable commit 2877a83e4a0a
("mm/hugetlb: use folio->lru int demote_free_hugetlb_folios()"), I found
that although free_hugepages is reduced to 63, nr_hugepages is not
reduced and stays at 64.

Is my expectation outdated? Or is this some kind of bug?

I assumed this is a bug and then dug a little bit more. It seems there
are two issues, or two things I don't really understand.

1. During try_memory_failure_hugetlb, we increase the target
   in-use folio's refcount via get_hwpoison_hugetlb_folio. However,
   until the end of try_memory_failure_hugetlb, this refcount is not put.
   I can make sense of this given we keep the in-use huge folio in the
   page cache. However, I failed to find where this refcount is put in
   the remove_inode_hugepages path. Is the refcount decrement missing?
   At least my testcase suggests yes. In folios_put_refs, I added a
   dump_page:
   if (!folio_ref_sub_and_test(folio, nr_refs)) {
	  /* dump folios whose refcount did not reach zero */
	  dump_page(&folio->page, "track hwpoison folio's ref");
	  continue;
   }
[ 1069.320976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2780000
[ 1069.320978] head: order:18 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[ 1069.320980] flags: 0x400000000100044(referenced|head|hwpoison|node=0|zone=1)
[ 1069.320982] page_type: f4(hugetlb)
[ 1069.320984] raw: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
[ 1069.320985] raw: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
[ 1069.320987] head: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
[ 1069.320988] head: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
[ 1069.320990] head: 0400000000000012 ffffdd53de000001 ffffffffffffffff 0000000000000000
[ 1069.320991] head: 0000000000040000 0000000000000000 00000000ffffffff 0000000000000000
[ 1069.320992] page dumped because: track hwpoison folio's ref

2. Even if the folio's refcount does drop to zero and we get into
   free_huge_folio, it is not clear to me which part of free_huge_folio
   is handling the case where the folio is HWPoison. In my test, what I
   observed is that eventually the folio is enqueue_hugetlb_folio()-ed.

I tried to fix both issues with a very immature patch, with which the
hugetlb-mfr-base.c selftest passes. The patch shows the two things I
think are currently missing.

I want to use this RFC to better understand what behavior I should
expect and, if this is indeed an issue, to discuss fixes. Thanks.

[1] https://lore.kernel.org/linux-mm/20250118231549.1652825-1-jiaqiyan@google.com/T
[2] https://lore.kernel.org/all/20221018200125.848471-1-jthoughton@google.com/T/#u

Jiaqi Yan (2):
  selftest/mm: test HWPoison hugetlb truncation behavior
  mm/hugetlb: immature fix to handle HWPoisoned folio

 mm/hugetlb.c                                  |   6 +
 mm/swap.c                                     |   9 +-
 tools/testing/selftests/mm/Makefile           |   1 +
 tools/testing/selftests/mm/hugetlb-mfr-base.c | 240 ++++++++++++++++++
 4 files changed, 255 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/mm/hugetlb-mfr-base.c

Comments

David Hildenbrand Jan. 20, 2025, 10:59 a.m. UTC | #1
On 19.01.25 19:06, Jiaqi Yan wrote:
> While I was working on userspace MFR via memfd [1], I spent some time to
> understand what the current kernel does when a HugeTLB-backed memfd is
> truncated. My expectation is, if there is a HWPoison HugeTLB folio
> mapped via the memfd to userspace, it will be unmapped right away but
> still be kept in page cache [2]; however when the memfd is truncated to
> zero or after the memfd is closed, kernel should dissolve the HWPoison
> folio in the page cache, and free only the clean raw pages to buddy
> allocator, excluding the poisoned raw page.
> 
> So I wrote a hugetlb-mfr-base.c selftest and expect
> 0. say nr_hugepages initially is 64 as system configuration.
> 1. after MADV_HWPOISON, nr_hugepages should still be 64 as we kept even
>     HWPoison huge folio in page cache. free_hugepages should be
>     nr_hugepages minus whatever the amount in use.
> 2. after truncating the memfd to zero, nr_hugepages should be reduced to 63 as
>     kernel dissolved and freed the HWPoison huge folio. free_hugepages
>     should also be 63.
> 
> However, when testing at the head of mm-stable commit 2877a83e4a0a
> ("mm/hugetlb: use folio->lru int demote_free_hugetlb_folios()"), I found
> although free_hugepages is reduced to 63, nr_hugepages is not reduced
> and stays at 64.
> 
> Is my expectation outdated? Or is this some kind of bug?
> 
> I assumed this is a bug and then dug a little bit more. It seems there
> are two issues, or two things I don't really understand.
> 
> 1. During try_memory_failure_hugetlb, we increase the target
>     in-use folio's refcount via get_hwpoison_hugetlb_folio. However,
>     until the end of try_memory_failure_hugetlb, this refcount is not put.
>     I can make sense of this given we keep in-use huge folio in page
>     cache.

Isn't the general rule that hwpoisoned folios have a raised refcount 
such that they won't get freed + reused? At least that's how the buddy 
deals with them, and I suspect also hugetlb?

> [ 1069.320976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2780000
> [ 1069.320978] head: order:18 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
> [ 1069.320980] flags: 0x400000000100044(referenced|head|hwpoison|node=0|zone=1)
> [ 1069.320982] page_type: f4(hugetlb)
> [ 1069.320984] raw: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
> [ 1069.320985] raw: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
> [ 1069.320987] head: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
> [ 1069.320988] head: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
> [ 1069.320990] head: 0400000000000012 ffffdd53de000001 ffffffffffffffff 0000000000000000
> [ 1069.320991] head: 0000000000040000 0000000000000000 00000000ffffffff 0000000000000000
> [ 1069.320992] page dumped because: track hwpoison folio's ref
> 
> 2. Even if folio's refcount does drop to zero and we get into
>     free_huge_folio, it is not clear to me which part of free_huge_folio
>     is handling the case that folio is HWPoison. In my test what I
>     observed is that eventually the folio is enqueue_hugetlb_folio()-ed.

How would we get a refcount of 0 if we assume the raised refcount on a 
hwpoisoned hugetlb folio?

I'm probably missing something: are you saying that you can trigger a 
hwpoisoned hugetlb folio to get reallocated again, in upstream code?
Jiaqi Yan Jan. 21, 2025, 1:21 a.m. UTC | #2
On Mon, Jan 20, 2025 at 2:59 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 19.01.25 19:06, Jiaqi Yan wrote:
> > While I was working on userspace MFR via memfd [1], I spent some time to
> > understand what the current kernel does when a HugeTLB-backed memfd is
> > truncated. My expectation is, if there is a HWPoison HugeTLB folio
> > mapped via the memfd to userspace, it will be unmapped right away but
> > still be kept in page cache [2]; however when the memfd is truncated to
> > zero or after the memfd is closed, kernel should dissolve the HWPoison
> > folio in the page cache, and free only the clean raw pages to buddy
> > allocator, excluding the poisoned raw page.
> >
> > So I wrote a hugetlb-mfr-base.c selftest and expect
> > 0. say nr_hugepages initially is 64 as system configuration.
> > 1. after MADV_HWPOISON, nr_hugepages should still be 64 as we kept even
> >     HWPoison huge folio in page cache. free_hugepages should be
> >     nr_hugepages minus whatever the amount in use.
> > 2. after truncating the memfd to zero, nr_hugepages should be reduced to 63 as
> >     kernel dissolved and freed the HWPoison huge folio. free_hugepages
> >     should also be 63.
> >
> > However, when testing at the head of mm-stable commit 2877a83e4a0a
> > ("mm/hugetlb: use folio->lru int demote_free_hugetlb_folios()"), I found
> > although free_hugepages is reduced to 63, nr_hugepages is not reduced
> > and stays at 64.
> >
> > Is my expectation outdated? Or is this some kind of bug?
> >
> > I assumed this is a bug and then dug a little bit more. It seems there
> > are two issues, or two things I don't really understand.
> >
> > 1. During try_memory_failure_hugetlb, we increase the target
> >     in-use folio's refcount via get_hwpoison_hugetlb_folio. However,
> >     until the end of try_memory_failure_hugetlb, this refcount is not put.
> >     I can make sense of this given we keep in-use huge folio in page
> >     cache.
>
> Isn't the general rule that hwpoisoned folios have a raised refcount
> such that they won't get freed + reused? At least that's how the buddy
> deals with them, and I suspect also hugetlb?

Thanks, David.

I see, so it is expected that the _entire_ huge folio will always have
at least a refcount of 1, even when the folio can become "free".

For *free* huge folio, try_memory_failure_hugetlb dissolves it and
frees the clean pages (a lot) to the buddy allocator. This made me
think the same thing will happen for *in-use* huge folio _eventually_
(i.e. somehow the refcount due to HWPoison can be put). I feel this is
a little bit unfortunate for the clean pages, but if it is what it is,
that's fair as it is not a bug.

>
> > [ 1069.320976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2780000
> > [ 1069.320978] head: order:18 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
> > [ 1069.320980] flags: 0x400000000100044(referenced|head|hwpoison|node=0|zone=1)
> > [ 1069.320982] page_type: f4(hugetlb)
> > [ 1069.320984] raw: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
> > [ 1069.320985] raw: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
> > [ 1069.320987] head: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
> > [ 1069.320988] head: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
> > [ 1069.320990] head: 0400000000000012 ffffdd53de000001 ffffffffffffffff 0000000000000000
> > [ 1069.320991] head: 0000000000040000 0000000000000000 00000000ffffffff 0000000000000000
> > [ 1069.320992] page dumped because: track hwpoison folio's ref
> >
> > 2. Even if folio's refcount does drop to zero and we get into
> >     free_huge_folio, it is not clear to me which part of free_huge_folio
> >     is handling the case that folio is HWPoison. In my test what I
> >     observed is that eventually the folio is enqueue_hugetlb_folio()-ed.
>
> How would we get a refcount of 0 if we assume the raised refcount on a
> hwpoisoned hugetlb folio?
>
> I'm probably missing something: are you saying that you can trigger a
> hwpoisoned hugetlb folio to get reallocated again, in upstream code?

No, I think it is just my misunderstanding. From what you said, the
expectation of HWPoison hugetlb folio is just it won't get reallocated
again, which is true.

My (wrong) expectation is that, in addition to the "won't get reallocated
again" part, some (large) portion of the huge folio will be freed to
the buddy allocator. On the other hand, is it something worth having /
improving? (1G - some_single_digit * 4KB) seems to be valuable to the
system, even though the pages are all 4K. #1 and #2 above are then what
needs to be done if the improvement is worth chasing.
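
Concretely, a 1G hugetlb folio is 262144 raw 4K pages, so dissolving it
around a single poisoned raw page would hand 262143 of them (just under
1G) back to the buddy allocator.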

>
>
> --
> Cheers,
>
> David / dhildenb
>
Jane Chu Jan. 21, 2025, 5 a.m. UTC | #3
On 1/20/2025 5:21 PM, Jiaqi Yan wrote:
> On Mon, Jan 20, 2025 at 2:59 AM David Hildenbrand <david@redhat.com> wrote:
>> On 19.01.25 19:06, Jiaqi Yan wrote:
>>> While I was working on userspace MFR via memfd [1], I spent some time to
>>> understand what the current kernel does when a HugeTLB-backed memfd is
>>> truncated. My expectation is, if there is a HWPoison HugeTLB folio
>>> mapped via the memfd to userspace, it will be unmapped right away but
>>> still be kept in page cache [2]; however when the memfd is truncated to
>>> zero or after the memfd is closed, kernel should dissolve the HWPoison
>>> folio in the page cache, and free only the clean raw pages to buddy
>>> allocator, excluding the poisoned raw page.
>>>
>>> So I wrote a hugetlb-mfr-base.c selftest and expect
>>> 0. say nr_hugepages initially is 64 as system configuration.
>>> 1. after MADV_HWPOISON, nr_hugepages should still be 64 as we kept even
>>>      HWPoison huge folio in page cache. free_hugepages should be
>>>      nr_hugepages minus whatever the amount in use.
>>> 2. after truncating the memfd to zero, nr_hugepages should be reduced to 63 as
>>>      kernel dissolved and freed the HWPoison huge folio. free_hugepages
>>>      should also be 63.
>>>
>>> However, when testing at the head of mm-stable commit 2877a83e4a0a
>>> ("mm/hugetlb: use folio->lru int demote_free_hugetlb_folios()"), I found
>>> although free_hugepages is reduced to 63, nr_hugepages is not reduced
>>> and stays at 64.
>>>
>>> Is my expectation outdated? Or is this some kind of bug?
>>>
>>> I assumed this is a bug and then dug a little bit more. It seems there
>>> are two issues, or two things I don't really understand.
>>>
>>> 1. During try_memory_failure_hugetlb, we increase the target
>>>      in-use folio's refcount via get_hwpoison_hugetlb_folio. However,
>>>      until the end of try_memory_failure_hugetlb, this refcount is not put.
>>>      I can make sense of this given we keep in-use huge folio in page
>>>      cache.
>> Isn't the general rule that hwpoisoned folios have a raised refcount
>> such that they won't get freed + reused? At least that's how the buddy
>> deals with them, and I suspect also hugetlb?
> Thanks, David.
>
> I see, so it is expected that the _entire_ huge folio will always have
> at least a refcount of 1, even when the folio can become "free".
>
> For *free* huge folio, try_memory_failure_hugetlb dissolves it and
> frees the clean pages (a lot) to the buddy allocator. This made me
> think the same thing will happen for *in-use* huge folio _eventually_
> (i.e. somehow the refcount due to HWPoison can be put). I feel this is
> a little bit unfortunate for the clean pages, but if it is what it is,
> that's fair as it is not a bug.

Agreed with David.  For *in use* hugetlb pages, including unused shmget
pages, hugetlb shouldn't dissolve the page, not until an explicit freeing
action is taken like RMID and echo 0 > nr_hugepages.

-jane

>
>>> [ 1069.320976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2780000
>>> [ 1069.320978] head: order:18 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
>>> [ 1069.320980] flags: 0x400000000100044(referenced|head|hwpoison|node=0|zone=1)
>>> [ 1069.320982] page_type: f4(hugetlb)
>>> [ 1069.320984] raw: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
>>> [ 1069.320985] raw: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
>>> [ 1069.320987] head: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
>>> [ 1069.320988] head: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
>>> [ 1069.320990] head: 0400000000000012 ffffdd53de000001 ffffffffffffffff 0000000000000000
>>> [ 1069.320991] head: 0000000000040000 0000000000000000 00000000ffffffff 0000000000000000
>>> [ 1069.320992] page dumped because: track hwpoison folio's ref
>>>
>>> 2. Even if folio's refcount does drop to zero and we get into
>>>      free_huge_folio, it is not clear to me which part of free_huge_folio
>>>      is handling the case that folio is HWPoison. In my test what I
>>>      observed is that eventually the folio is enqueue_hugetlb_folio()-ed.
>> How would we get a refcount of 0 if we assume the raised refcount on a
>> hwpoisoned hugetlb folio?
>>
>> I'm probably missing something: are you saying that you can trigger a
>> hwpoisoned hugetlb folio to get reallocated again, in upstream code?
> No, I think it is just my misunderstanding. From what you said, the
> expectation of HWPoison hugetlb folio is just it won't get reallocated
> again, which is true.
>
> My (wrong) expectation is, in addition to the "won't reallocated
> again" part, some (large) portion of the huge folio will be freed to
> the buddy allocator. On the other hand, is it something worth having /
> improving? (1G - some_single_digit * 4KB) seems to be valuable to the
> system, though they are all 4K. #1 and #2 above are then what needs to
> be done if the improvement is worth chasing.
>
>>
>> --
>> Cheers,
>>
>> David / dhildenb
>>
Jiaqi Yan Jan. 21, 2025, 5:08 a.m. UTC | #4
On Mon, Jan 20, 2025 at 9:01 PM <jane.chu@oracle.com> wrote:
>
>
> On 1/20/2025 5:21 PM, Jiaqi Yan wrote:
> > On Mon, Jan 20, 2025 at 2:59 AM David Hildenbrand <david@redhat.com> wrote:
> >> On 19.01.25 19:06, Jiaqi Yan wrote:
> >>> While I was working on userspace MFR via memfd [1], I spent some time to
> >>> understand what the current kernel does when a HugeTLB-backed memfd is
> >>> truncated. My expectation is, if there is a HWPoison HugeTLB folio
> >>> mapped via the memfd to userspace, it will be unmapped right away but
> >>> still be kept in page cache [2]; however when the memfd is truncated to
> >>> zero or after the memfd is closed, kernel should dissolve the HWPoison
> >>> folio in the page cache, and free only the clean raw pages to buddy
> >>> allocator, excluding the poisoned raw page.
> >>>
> >>> So I wrote a hugetlb-mfr-base.c selftest and expect
> >>> 0. say nr_hugepages initially is 64 as system configuration.
> >>> 1. after MADV_HWPOISON, nr_hugepages should still be 64 as we kept even
> >>>      HWPoison huge folio in page cache. free_hugepages should be
> >>>      nr_hugepages minus whatever the amount in use.
> >>> 2. after truncating the memfd to zero, nr_hugepages should be reduced to 63 as
> >>>      kernel dissolved and freed the HWPoison huge folio. free_hugepages
> >>>      should also be 63.
> >>>
> >>> However, when testing at the head of mm-stable commit 2877a83e4a0a
> >>> ("mm/hugetlb: use folio->lru int demote_free_hugetlb_folios()"), I found
> >>> although free_hugepages is reduced to 63, nr_hugepages is not reduced
> >>> and stays at 64.
> >>>
> >>> Is my expectation outdated? Or is this some kind of bug?
> >>>
> >>> I assumed this is a bug and then dug a little bit more. It seems there
> >>> are two issues, or two things I don't really understand.
> >>>
> >>> 1. During try_memory_failure_hugetlb, we increase the target
> >>>      in-use folio's refcount via get_hwpoison_hugetlb_folio. However,
> >>>      until the end of try_memory_failure_hugetlb, this refcount is not put.
> >>>      I can make sense of this given we keep in-use huge folio in page
> >>>      cache.
> >> Isn't the general rule that hwpoisoned folios have a raised refcount
> >> such that they won't get freed + reused? At least that's how the buddy
> >> deals with them, and I suspect also hugetlb?
> > Thanks, David.
> >
> > I see, so it is expected that the _entire_ huge folio will always have
> > at least a refcount of 1, even when the folio can become "free".
> >
> > For *free* huge folio, try_memory_failure_hugetlb dissolves it and
> > frees the clean pages (a lot) to the buddy allocator. This made me
> > think the same thing will happen for *in-use* huge folio _eventually_
> > (i.e. somehow the refcount due to HWPoison can be put). I feel this is
> > a little bit unfortunate for the clean pages, but if it is what it is,
> > that's fair as it is not a bug.
>
> Agreed with David.  For *in use* hugetlb pages, including unused shmget
> pages, hugetlb shouldn't dissolve the page, not until an explicit freeing action is taken like
> RMID and echo 0 > nr_hugepages.

To clarify myself, I am not asking memory-failure.c to dissolve the
hugepage at the time it is in-use, but rather when it becomes free
(truncated or process exited).

>
> -jane
>
> >
> >>> [ 1069.320976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2780000
> >>> [ 1069.320978] head: order:18 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
> >>> [ 1069.320980] flags: 0x400000000100044(referenced|head|hwpoison|node=0|zone=1)
> >>> [ 1069.320982] page_type: f4(hugetlb)
> >>> [ 1069.320984] raw: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
> >>> [ 1069.320985] raw: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
> >>> [ 1069.320987] head: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
> >>> [ 1069.320988] head: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
> >>> [ 1069.320990] head: 0400000000000012 ffffdd53de000001 ffffffffffffffff 0000000000000000
> >>> [ 1069.320991] head: 0000000000040000 0000000000000000 00000000ffffffff 0000000000000000
> >>> [ 1069.320992] page dumped because: track hwpoison folio's ref
> >>>
> >>> 2. Even if folio's refcount does drop to zero and we get into
> >>>      free_huge_folio, it is not clear to me which part of free_huge_folio
> >>>      is handling the case that folio is HWPoison. In my test what I
> >>>      observed is that eventually the folio is enqueue_hugetlb_folio()-ed.
> >> How would we get a refcount of 0 if we assume the raised refcount on a
> >> hwpoisoned hugetlb folio?
> >>
> >> I'm probably missing something: are you saying that you can trigger a
> >> hwpoisoned hugetlb folio to get reallocated again, in upstream code?
> > No, I think it is just my misunderstanding. From what you said, the
> > expectation of HWPoison hugetlb folio is just it won't get reallocated
> > again, which is true.
> >
> > My (wrong) expectation is, in addition to the "won't reallocated
> > again" part, some (large) portion of the huge folio will be freed to
> > the buddy allocator. On the other hand, is it something worth having /
> > improving? (1G - some_single_digit * 4KB) seems to be valuable to the
> > system, though they are all 4K. #1 and #2 above are then what needs to
> > be done if the improvement is worth chasing.
> >
> >>
> >> --
> >> Cheers,
> >>
> >> David / dhildenb
> >>
Jane Chu Jan. 21, 2025, 5:22 a.m. UTC | #5
On 1/20/2025 9:08 PM, Jiaqi Yan wrote:
> On Mon, Jan 20, 2025 at 9:01 PM <jane.chu@oracle.com> wrote:
>>
>> On 1/20/2025 5:21 PM, Jiaqi Yan wrote:
>>> On Mon, Jan 20, 2025 at 2:59 AM David Hildenbrand <david@redhat.com> wrote:
>>>> On 19.01.25 19:06, Jiaqi Yan wrote:
>>>>> While I was working on userspace MFR via memfd [1], I spent some time to
>>>>> understand what the current kernel does when a HugeTLB-backed memfd is
>>>>> truncated. My expectation is, if there is a HWPoison HugeTLB folio
>>>>> mapped via the memfd to userspace, it will be unmapped right away but
>>>>> still be kept in page cache [2]; however when the memfd is truncated to
>>>>> zero or after the memfd is closed, kernel should dissolve the HWPoison
>>>>> folio in the page cache, and free only the clean raw pages to buddy
>>>>> allocator, excluding the poisoned raw page.
>>>>>
>>>>> So I wrote a hugetlb-mfr-base.c selftest and expect
>>>>> 0. say nr_hugepages initially is 64 as system configuration.
>>>>> 1. after MADV_HWPOISON, nr_hugepages should still be 64 as we kept even
>>>>>       HWPoison huge folio in page cache. free_hugepages should be
>>>>>       nr_hugepages minus whatever the amount in use.
>>>>> 2. after truncating the memfd to zero, nr_hugepages should be reduced to 63 as
>>>>>       kernel dissolved and freed the HWPoison huge folio. free_hugepages
>>>>>       should also be 63.
>>>>>
>>>>> However, when testing at the head of mm-stable commit 2877a83e4a0a
>>>>> ("mm/hugetlb: use folio->lru int demote_free_hugetlb_folios()"), I found
>>>>> although free_hugepages is reduced to 63, nr_hugepages is not reduced
>>>>> and stays at 64.
>>>>>
>>>>> Is my expectation outdated? Or is this some kind of bug?
>>>>>
>>>>> I assumed this is a bug and then dug a little bit more. It seems there
>>>>> are two issues, or two things I don't really understand.
>>>>>
>>>>> 1. During try_memory_failure_hugetlb, we increase the target
>>>>>       in-use folio's refcount via get_hwpoison_hugetlb_folio. However,
>>>>>       until the end of try_memory_failure_hugetlb, this refcount is not put.
>>>>>       I can make sense of this given we keep in-use huge folio in page
>>>>>       cache.
>>>> Isn't the general rule that hwpoisoned folios have a raised refcount
>>>> such that they won't get freed + reused? At least that's how the buddy
>>>> deals with them, and I suspect also hugetlb?
>>> Thanks, David.
>>>
>>> I see, so it is expected that the _entire_ huge folio will always have
>>> at least a refcount of 1, even when the folio can become "free".
>>>
>>> For *free* huge folio, try_memory_failure_hugetlb dissolves it and
>>> frees the clean pages (a lot) to the buddy allocator. This made me
>>> think the same thing will happen for *in-use* huge folio _eventually_
>>> (i.e. somehow the refcount due to HWPoison can be put). I feel this is
>>> a little bit unfortunate for the clean pages, but if it is what it is,
>>> that's fair as it is not a bug.
>> Agreed with David.  For *in use* hugetlb pages, including unused shmget
>> pages, hugetlb shouldn't dissolve the page, not until an explicit freeing action is taken like
>> RMID and echo 0 > nr_hugepages.
> To clarify myself, I am not asking memory-failure.c to dissolve the
> hugepage at the time it is in-use, but rather when it becomes free
> (truncated or process exited).

Understood, a free hugetlb page in the pool should have refcount 1 though.

-jane

>
>> -jane
>>
>>>>> [ 1069.320976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2780000
>>>>> [ 1069.320978] head: order:18 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
>>>>> [ 1069.320980] flags: 0x400000000100044(referenced|head|hwpoison|node=0|zone=1)
>>>>> [ 1069.320982] page_type: f4(hugetlb)
>>>>> [ 1069.320984] raw: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
>>>>> [ 1069.320985] raw: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
>>>>> [ 1069.320987] head: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
>>>>> [ 1069.320988] head: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
>>>>> [ 1069.320990] head: 0400000000000012 ffffdd53de000001 ffffffffffffffff 0000000000000000
>>>>> [ 1069.320991] head: 0000000000040000 0000000000000000 00000000ffffffff 0000000000000000
>>>>> [ 1069.320992] page dumped because: track hwpoison folio's ref
>>>>>
>>>>> 2. Even if folio's refcount does drop to zero and we get into
>>>>>       free_huge_folio, it is not clear to me which part of free_huge_folio
>>>>>       is handling the case that folio is HWPoison. In my test what I
>>>>>       observed is that eventually the folio is enqueue_hugetlb_folio()-ed.
>>>> How would we get a refcount of 0 if we assume the raised refcount on a
>>>> hwpoisoned hugetlb folio?
>>>>
>>>> I'm probably missing something: are you saying that you can trigger a
>>>> hwpoisoned hugetlb folio to get reallocated again, in upstream code?
>>> No, I think it is just my misunderstanding. From what you said, the
>>> expectation of HWPoison hugetlb folio is just it won't get reallocated
>>> again, which is true.
>>>
>>> My (wrong) expectation is, in addition to the "won't reallocated
>>> again" part, some (large) portion of the huge folio will be freed to
>>> the buddy allocator. On the other hand, is it something worth having /
>>> improving? (1G - some_single_digit * 4KB) seems to be valuable to the
>>> system, though they are all 4K. #1 and #2 above are then what needs to
>>> be done if the improvement is worth chasing.
>>>
>>>> --
>>>> Cheers,
>>>>
>>>> David / dhildenb
>>>>
David Hildenbrand Jan. 21, 2025, 8:02 a.m. UTC | #6
On 21.01.25 02:21, Jiaqi Yan wrote:
> On Mon, Jan 20, 2025 at 2:59 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 19.01.25 19:06, Jiaqi Yan wrote:
>>> While I was working on userspace MFR via memfd [1], I spent some time to
>>> understand what the current kernel does when a HugeTLB-backed memfd is
>>> truncated. My expectation is, if there is a HWPoison HugeTLB folio
>>> mapped via the memfd to userspace, it will be unmapped right away but
>>> still be kept in page cache [2]; however when the memfd is truncated to
>>> zero or after the memfd is closed, kernel should dissolve the HWPoison
>>> folio in the page cache, and free only the clean raw pages to buddy
>>> allocator, excluding the poisoned raw page.
>>>
>>> So I wrote a hugetlb-mfr-base.c selftest and expect
>>> 0. say nr_hugepages initially is 64 as system configuration.
>>> 1. after MADV_HWPOISON, nr_hugepages should still be 64 as we kept even
>>>      HWPoison huge folio in page cache. free_hugepages should be
>>>      nr_hugepages minus whatever the amount in use.
>>> 2. after truncating the memfd to zero, nr_hugepages should be reduced to 63 as
>>>      kernel dissolved and freed the HWPoison huge folio. free_hugepages
>>>      should also be 63.
>>>
>>> However, when testing at the head of mm-stable commit 2877a83e4a0a
>>> ("mm/hugetlb: use folio->lru int demote_free_hugetlb_folios()"), I found
>>> although free_hugepages is reduced to 63, nr_hugepages is not reduced
>>> and stays at 64.
>>>
>>> Is my expectation outdated? Or is this some kind of bug?
>>>
>>> I assumed this is a bug and then dug a little bit more. It seems there
>>> are two issues, or two things I don't really understand.
>>>
>>> 1. During try_memory_failure_hugetlb, we increase the target
>>>      in-use folio's refcount via get_hwpoison_hugetlb_folio. However,
>>>      until the end of try_memory_failure_hugetlb, this refcount is not put.
>>>      I can make sense of this given we keep in-use huge folio in page
>>>      cache.
>>
>> Isn't the general rule that hwpoisoned folios have a raised refcount
>> such that they won't get freed + reused? At least that's how the buddy
>> deals with them, and I suspect also hugetlb?
> 
> Thanks, David.
> 
> I see, so it is expected that the _entire_ huge folio will always have
> at least a refcount of 1, even when the folio can become "free".
> 
> For *free* huge folio, try_memory_failure_hugetlb dissolves it and
> frees the clean pages (a lot) to the buddy allocator. This made me
> think the same thing will happen for *in-use* huge folio _eventually_
> (i.e. somehow the refcount due to HWPoison can be put). I feel this is
> a little bit unfortunate for the clean pages, but if it is what it is,
> that's fair as it is not a bug.

Yes, that's my understanding. Free pages are a lot easier to handle 
because we can just reliably dissolve and free them. For in-use, it's a
lot trickier.

Similar to ordinary free buddy vs. allocated pages.

> 
>>
>>> [ 1069.320976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2780000
>>> [ 1069.320978] head: order:18 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
>>> [ 1069.320980] flags: 0x400000000100044(referenced|head|hwpoison|node=0|zone=1)
>>> [ 1069.320982] page_type: f4(hugetlb)
>>> [ 1069.320984] raw: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
>>> [ 1069.320985] raw: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
>>> [ 1069.320987] head: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
>>> [ 1069.320988] head: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
>>> [ 1069.320990] head: 0400000000000012 ffffdd53de000001 ffffffffffffffff 0000000000000000
>>> [ 1069.320991] head: 0000000000040000 0000000000000000 00000000ffffffff 0000000000000000
>>> [ 1069.320992] page dumped because: track hwpoison folio's ref
>>>
>>> 2. Even if folio's refcount does drop to zero and we get into
>>>      free_huge_folio, it is not clear to me which part of free_huge_folio
>>>      is handling the case that folio is HWPoison. In my test what I
>>>      observed is that eventually the folio is enqueue_hugetlb_folio()-ed.
>>
>> How would we get a refcount of 0 if we assume the raised refcount on a
>> hwpoisoned hugetlb folio?
>>
>> I'm probably missing something: are you saying that you can trigger a
>> hwpoisoned hugetlb folio to get reallocated again, in upstream code?
> 
> No, I think it is just my misunderstanding. From what you said, the
> expectation of HWPoison hugetlb folio is just it won't get reallocated
> again, which is true.

Right.

> 
> My (wrong) expectation is, in addition to the "won't reallocated
> again" part, some (large) portion of the huge folio will be freed to
> the buddy allocator. On the other hand, is it something worth having /
> improving? (1G - some_single_digit * 4KB) seems to be valuable to the
> system, though they are all 4K. #1 and #2 above are then what needs to
> be done if the improvement is worth chasing.

I think one challenge is making sure that the page won't accidentally 
get reallocated again -- and in contrast to free hugetlb pages we cannot 
handle this split synchronously.

I recall that we might not remember exactly which page of a hugetlb
folio was poisoned (HVO optimization), but maybe we changed that in
the meantime.

Likely it would be worth having, but probably not very easy to implement.