Message ID: 20230714160407.4142030-1-ryan.roberts@arm.com (mailing list archive)
Series: variable-order, large folios for anonymous memory
On 14/07/2023 17:04, Ryan Roberts wrote:
> Hi All,
>
> This is v3 of a series to implement variable order, large folios for anonymous
> memory. (currently called "FLEXIBLE_THP") The objective of this is to improve
> performance by allocating larger chunks of memory during anonymous page faults.
> See [1] and [2] for background.

A question for anyone that can help; I'm preparing v4 and as part of that am
running the mm selftests, now that I've fixed them up to run reliably for
arm64. This is showing 2 regressions vs the v6.5-rc3 baseline:

1) khugepaged test fails here:
   # Run test: collapse_max_ptes_none (khugepaged:anon)
   # Maybe collapse with max_ptes_none exceeded.... Fail
   # Unexpected huge page

2) split_huge_page_test fails with:
   # Still AnonHugePages not split

I *think* (but haven't yet verified) that (1) is due to khugepaged ignoring
non-order-0 folios when looking for candidates to collapse. Now that we have
large anon folios, the memory allocated by the test is in large folios and
therefore does not get collapsed. We understand this issue, and I believe
DavidH's new scheme for determining exclusive vs shared should give us the
tools to solve this.

But (2) is weird. If I run this test on its own immediately after booting, it
passes. If I then run the khugepaged test, then re-run this test, it fails.

The test is allocating 4 hugepages, then requesting they are split using the
debugfs interface. Then the test looks at /proc/self/smaps to check that
AnonHugePages is back to 0.

In both the passing and failing cases, the kernel thinks that it has
successfully split the pages; the debug logs in split_huge_pages_pid() confirm
this. In the failing case, I wonder if somehow khugepaged could be immediately
re-collapsing the pages before user space can observe the split? Perhaps the
failed khugepaged test has left khugepaged in an "awake" state and it
immediately pounces?

Thanks,
Ryan
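[Editor's note: for anyone reproducing this, the smaps check described above boils down to summing the AnonHugePages fields for the test mapping. A minimal sketch of that accounting step, run against canned smaps-style data rather than a live /proc — the sample values and the /tmp path are illustrative, not real kernel output:]

```shell
#!/bin/sh
# Sum the AnonHugePages fields from an smaps-style dump. After a successful
# split, the total for the test range should drop to 0 kB.
sum_anon_huge() {
    awk '/^AnonHugePages:/ { total += $2 } END { print total+0 " kB" }' "$1"
}

# Canned sample standing in for /proc/self/smaps output (illustrative values).
cat > /tmp/smaps.sample <<'EOF'
Size:               8192 kB
AnonHugePages:      2048 kB
Size:               8192 kB
AnonHugePages:         0 kB
EOF

sum_anon_huge /tmp/smaps.sample   # prints "2048 kB"
```

[In the failing case described above, the same sum would presumably still report 8192 kB, because the PMD mapping has been re-established by the time userspace reads smaps.]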
On 24 Jul 2023, at 7:59, Ryan Roberts wrote:

> [...]
>
> In both the passing and failing cases, the kernel thinks that it has
> successfully split the pages; the debug logs in split_huge_pages_pid() confirm
> this. In the failing case, I wonder if somehow khugepaged could be immediately
> re-collapsing the pages before user space can observe the split? Perhaps the
> failed khugepaged test has left khugepaged in an "awake" state and it
> immediately pounces?

This is more likely to be a stats issue. Have you checked smaps to see if
AnonHugePages is 0 kB by placing a getchar() before the exit(EXIT_FAILURE)?
split_huge_page_test checks that stat to make sure the split indeed happened.

--
Best Regards,
Yan, Zi
On 24/07/2023 15:58, Zi Yan wrote:
> On 24 Jul 2023, at 7:59, Ryan Roberts wrote:
>
>> [...]
>>
>> In the failing case, I wonder if somehow khugepaged could be immediately
>> re-collapsing the pages before user space can observe the split? Perhaps the
>> failed khugepaged test has left khugepaged in an "awake" state and it
>> immediately pounces?
>
> This is more likely to be a stats issue. Have you checked smaps to see if
> AnonHugePages is 0 kB by placing a getchar() before the exit(EXIT_FAILURE)?

Yes - it's still 8192K. But looking at the code, that value is determined from
the fact that there is a PMD block mapping present. And the split definitely
succeeded, so something must have re-collapsed it.

Looking into the khugepaged test suite, it saves the thp and khugepaged settings
out of sysfs, modifies them for the tests, then restores them when finished. But
it doesn't restore if exiting early (due to failure). It changes the settings
for alloc_sleep_millisecs and scan_sleep_millisecs from a large number of
seconds to 10 ms, for example. So I'm pretty sure this is the culprit.

> split_huge_page_test checks that stat to make sure the split indeed happened.
>
> --
> Best Regards,
> Yan, Zi
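[Editor's note: if the early-exit path is indeed the culprit, the usual fix is to tie the restore to a trap on EXIT so it also runs on failure paths. A hedged sketch of that idiom, using a /tmp directory to stand in for /sys/kernel/mm/transparent_hugepage/khugepaged so it runs anywhere; the knob name is from the thread, the 60000 starting value is illustrative:]

```shell
#!/bin/sh
# Stand-in for the khugepaged sysfs directory so the sketch is self-contained.
SYSFS=/tmp/fake_sysfs
mkdir -p "$SYSFS"
echo 60000 > "$SYSFS/scan_sleep_millisecs"

run_khugepaged_tests() (
    saved=$(cat "$SYSFS/scan_sleep_millisecs")
    # Restore on ANY exit from this subshell, including early failure paths.
    trap 'echo "$saved" > "$SYSFS/scan_sleep_millisecs"' EXIT
    echo 10 > "$SYSFS/scan_sleep_millisecs"   # aggressive setting for the test
    exit 1                                     # simulate the suite bailing out early
)

run_khugepaged_tests || echo "tests failed, but settings were restored"
cat "$SYSFS/scan_sleep_millisecs"             # prints "60000"
```

[The real suite would save and restore all of the thp/khugepaged knobs, not just one; the point is only that the restore must be bound to exit rather than to the success path.]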
Ryan,

Do you have a kselftest for this new feature? I'd like to test it out on FVP
when I have the chance.

On Tue, Jul 25, 2023 at 0:42 Ryan Roberts <ryan.roberts@arm.com> wrote:
> [...]
>
> Looking into the khugepaged test suite, it saves the thp and khugepaged
> settings out of sysfs, modifies them for the tests, then restores them when
> finished. But it doesn't restore if exiting early (due to failure). It changes
> the settings for alloc_sleep_millisecs and scan_sleep_millisecs from a large
> number of seconds to 10 ms, for example. So I'm pretty sure this is the
> culprit.
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
On 26/07/2023 08:36, Itaru Kitayama wrote:
> Ryan,
> Do you have a kselftest for this new feature?
> I'd like to test it out on FVP when I have the chance.

A very timely question! I have modified the mm/cow tests to additionally test
large anon folios. That patch is part of v4, which I am about to start writing
the cover letter for. So look out for that in around an hour.
Awesome, thanks!

On Wed, Jul 26, 2023 at 17:42 Ryan Roberts <ryan.roberts@arm.com> wrote:
> On 26/07/2023 08:36, Itaru Kitayama wrote:
>> Ryan,
>> Do you have a kselftest for this new feature?
>> I'd like to test it out on FVP when I have the chance.
>
> A very timely question! I have modified the mm/cow tests to additionally test
> large anon folios. That patch is part of v4, which I am about to start writing
> the cover letter for. So look out for that in around an hour.