[RFC,V1,00/13] mm: slowtier page promotion based on PTE A bit

Message ID: 20250319193028.29514-1-raghavendra.kt@amd.com

Message

Raghavendra K T March 19, 2025, 7:30 p.m. UTC
Introduction:
=============
In the current hot page promotion scheme, all the activities, including
process address space scanning, NUMA hint fault handling and page
migration, are performed in process context, i.e., the scanning overhead
is borne by the applications themselves.

This is the RFC V1 patch series for (slow tier) CXL page promotion.
The approach in this patchset addresses the issue above by adding PTE
Accessed (A) bit scanning.

Scanning is done by a global kernel thread that routinely scans all
processes' address spaces and checks for accesses by reading the
PTE A bit.

A separate migration thread migrates/promotes the pages to a toptier
node, based on a simple heuristic that uses the toptier scan/access
information of the mm.
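
As a rough sketch of the mechanism (a hedged illustration only: the
kmmscand_* names and the list handling below are invented, and locking,
refcounting and folio isolation are omitted; the real code is in
mm/kmmscand.c), the scan can be built on the kernel's page table walker,
test-and-clearing the Accessed bit and collecting slow-tier folios for
the migration thread:

#include <linux/pagewalk.h>
#include <linux/memory-tiers.h>

static int kmmscand_pte_entry(pte_t *pte, unsigned long addr,
			      unsigned long next, struct mm_walk *walk)
{
	struct list_head *migrate_list = walk->private;
	pte_t ptent = ptep_get(pte);
	struct folio *folio;

	if (!pte_present(ptent))
		return 0;

	/* Test and clear the A bit: "young" means accessed since last scan. */
	if (!ptep_test_and_clear_young(walk->vma, addr, pte))
		return 0;

	folio = vm_normal_folio(walk->vma, addr, ptent);
	if (!folio)
		return 0;

	/* Only folios on a slow-tier (e.g. CXL) node are promotion candidates. */
	if (!node_is_toptier(folio_nid(folio)))
		list_add_tail(&folio->lru, migrate_list);	/* isolation omitted */

	return 0;
}

static const struct mm_walk_ops kmmscand_walk_ops = {
	.pte_entry = kmmscand_pte_entry,
};

The scanning thread would then call walk_page_range() over each mm on the
global list and hand the collected list over to the migration thread.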

Additionally, based on the feedback for RFC V0 [4], a prctl knob with
a scalar value is provided to control per-task scanning.
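
As an illustration of how such a knob might be used from userspace (the
constant name and its value below are placeholders, not the ABI actually
proposed in patch 13):

#include <sys/prctl.h>
#include <stdio.h>

/* Hypothetical prctl name/number; the real one comes from the patched
 * uapi headers in patch 13. */
#define PR_SET_MEMSCAN	0x4d53

int main(void)
{
	/*
	 * Hypothetical scalar semantics: 0 could disable scanning for this
	 * task, higher values could request more aggressive scanning.
	 */
	if (prctl(PR_SET_MEMSCAN, 1L, 0L, 0L, 0L))
		perror("prctl");
	return 0;
}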

Initial results show promising numbers on a microbenchmark. Numbers with
real benchmarks, along with findings and tunings, will follow soon.

Experiment:
============
Abench microbenchmark:
- Allocates 8GB/16GB/32GB/64GB of memory on the CXL node.
- 64 threads are created, and each thread randomly accesses pages at 4K
  granularity.
- 512 iterations, with a delay of 1 us between two successive iterations.

SUT: 512 CPUs, 2 NUMA nodes, 256GB, AMD EPYC.

3 runs, command:  abench -m 2 -d 1 -i 512 -s <size>

The benchmark measures the time taken to complete the task; lower is
better. The expectation is that CXL node memory is promoted as fast as
possible.
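
abench itself is not reproduced here, but the access pattern it exercises
is roughly the following (a sketch assuming libnuma, with the CXL node
taken to be node 1; error handling is minimal):

#include <numa.h>	/* link with -lnuma */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define NTHREADS	64
#define ITERS		512
#define PAGE_SZ		4096

static char *buf;
static size_t nr_pages;

static void *worker(void *arg)
{
	unsigned int seed = (unsigned int)(unsigned long)arg;

	for (int it = 0; it < ITERS; it++) {
		/* Touch random pages at 4K granularity. */
		for (size_t i = 0; i < nr_pages / NTHREADS; i++)
			buf[(rand_r(&seed) % nr_pages) * PAGE_SZ]++;
		usleep(1);	/* -d 1: 1 us delay between iterations */
	}
	return NULL;
}

int main(int argc, char **argv)
{
	size_t sz = (argc > 1 ? (size_t)atoll(argv[1]) : 8) << 30; /* GB */
	pthread_t t[NTHREADS];

	buf = numa_alloc_onnode(sz, 1);	/* allocate on the CXL node */
	if (!buf)
		return 1;
	nr_pages = sz / PAGE_SZ;

	for (unsigned long i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);

	numa_free(buf, sz);
	return 0;
}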

Base case:    6.14-rc6 w/ numab mode = 2 (hot page promotion is enabled).
Patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled);
              we expect the daemon to do the page promotion.
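
For reference, the numab mode above is the value written to
/proc/sys/kernel/numa_balancing (0 = off, 1 = NUMA balancing,
2 = NUMA balancing + memory-tiering hot page promotion), e.g.:

#include <stdio.h>

/* Set the NUMA balancing mode; returns 0 on success. */
static int set_numab_mode(int mode)
{
	FILE *f = fopen("/proc/sys/kernel/numa_balancing", "w");

	if (!f)
		return -1;
	fprintf(f, "%d\n", mode);
	return fclose(f);
}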

Result:
========
         base NUMAB2                    patched NUMAB1
         time in sec  (%stdev)   time in sec  (%stdev)     %gain
 8GB     134.33       ( 0.19 )        120.52  ( 0.21 )     10.28
16GB     292.24       ( 0.60 )        275.97  ( 0.18 )      5.56
32GB     585.06       ( 0.24 )        546.49  ( 0.35 )      6.59
64GB    1278.98       ( 0.27 )       1205.20  ( 2.29 )      5.76

Base case:    6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
Patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
         base NUMAB1                    patched NUMAB1
         time in sec  (%stdev)   time in sec  (%stdev)     %gain
 8GB     186.71       ( 0.99 )        120.52  ( 0.21 )     35.45 
16GB     376.09       ( 0.46 )        275.97  ( 0.18 )     26.62 
32GB     744.37       ( 0.71 )        546.49  ( 0.35 )     26.58 
64GB    1534.49       ( 0.09 )       1205.20  ( 2.29 )     21.45


Major Changes since V0:
=======================
- A separate migration thread is used for migration, thus alleviating the
  need for multi-threaded scanning (at least as per tracing).

- A simple heuristic for target node calculation is added (a rough sketch
  follows after this list).

- A prctl interface (suggested by David R) with a scalar value is added to
  control per-task scanning.

- Steve's comment on tracing has been incorporated.

- A bug reported by Davidlohr has been fixed.

- An initial scan delay, similar to NUMAB1 mode, has been added.

- Got rid of the migration lock during mm_walk.
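
As a rough sketch of what such a target node heuristic could look like
(the per-mm counters and the helper below are invented for illustration;
the actual heuristic is in patch 9):

#include <linux/nodemask.h>
#include <linux/memory-tiers.h>

/* Hypothetical per-mm access counts gathered during scanning. */
struct kmmscand_stats {
	unsigned long accessed[MAX_NUMNODES];
};

static int kmmscand_target_node(struct kmmscand_stats *stats)
{
	int nid, best_nid = NUMA_NO_NODE;
	unsigned long best = 0;

	/* Prefer the toptier node where this mm's accesses were seen most. */
	for_each_node_state(nid, N_MEMORY) {
		if (!node_is_toptier(nid))
			continue;
		if (stats->accessed[nid] >= best) {
			best = stats->accessed[nid];
			best_nid = nid;
		}
	}
	return best_nid;
}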

PS: Occasionally, when scanning is too fast compared to migration, I do
see scanning stall waiting for the lock. This should be fixed in the next
version by using memslots for migration.

Disclaimer, takeaways, discussion points and future TODOs
==========================================================
1) Source code and patch segregation are still to be improved; the current
patchset only provides a skeleton.

2) Unifying the sources of hotness is not easy (as perhaps mentioned by
Jonathan), but all the consumers/producers can perhaps work cooperatively.

Scanning:
3) Major positive: the current patchset is able to cover all of the process
address space scanning effectively, with simple algorithms to tune scan_size
and scan_period (a sketch follows below).
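
For instance, one simple tuning rule (a sketch only; the struct, field and
constant names below are illustrative, not the patchset's) is to scan
sooner when a pass finds many young PTEs and to back off otherwise:

#include <linux/minmax.h>

/* Illustrative per-mm scan state; names are made up. */
struct kmmscand_ctrl {
	unsigned long scan_period_ms;
	unsigned long scan_size;	/* bytes scanned per pass */
};

#define SCAN_PERIOD_MIN_MS	400UL
#define SCAN_PERIOD_MAX_MS	(16 * SCAN_PERIOD_MIN_MS)

static void kmmscand_tune(struct kmmscand_ctrl *c,
			  unsigned long young, unsigned long scanned)
{
	if (!scanned)
		return;

	if (4 * young >= scanned) {
		/* >= 25% of scanned PTEs were young: scan sooner and more. */
		c->scan_period_ms = max(SCAN_PERIOD_MIN_MS,
					c->scan_period_ms / 2);
		c->scan_size *= 2;	/* clamping omitted */
	} else {
		/* Mostly cold: back off to reduce scanning overhead. */
		c->scan_period_ms = min(SCAN_PERIOD_MAX_MS,
					c->scan_period_ms * 2);
	}
}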

4) Effective tracking of folios or address spaces using DAMON, or ideas
borrowed from it, is yet to be explored fully.

5) Use timestamp-based migration (similar to numab mode=2) instead of
migrating immediately when the PTE A bit is set.
(Cons:
 - It will not be as accurate, since it is done outside of process context.
 - The performance benefit may be lost.)

Migration:

6) Currently a fast scanner can bombard the migration list; the list needs
to be maintained in a more organized way (e.g. using memslots), so that it
also helps in maintaining recency/frequency information (similar to
kpromoted, posted by Bharata).

7) NUMAB2 throttling is very effective; we would need a common interface to
control migration, and also to exploit batch migration.

Thanks to Bharata, Johannes, Gregory, SJ, Chris, David Rientjes, Jonathan,
John Hubbard, Davidlohr, Ying, Willy, Hyeonggon Yoo and many others for your
valuable comments and support.

Links:
[1] https://lore.kernel.org/lkml/20241127082201.1276-1-gourry@gourry.net/
[2] kstaled: https://lore.kernel.org/lkml/1317170947-17074-3-git-send-email-walken@google.com/#r
[3] https://lore.kernel.org/lkml/Y+Pj+9bbBbHpf6xM@hirez.programming.kicks-ass.net/
[4] RFC V0: https://lore.kernel.org/all/20241201153818.2633616-1-raghavendra.kt@amd.com/
[5] Recap: https://lore.kernel.org/linux-mm/20241226012833.rmmbkws4wdhzdht6@ed.ac.uk/T/
[6] LSFMM: https://lore.kernel.org/linux-mm/20250123105721.424117-1-raghavendra.kt@amd.com/#r
[7] LSFMM: https://lore.kernel.org/linux-mm/20250131130901.00000dd1@huawei.com/

I might unintentionally have CCed more or fewer people than needed.

Patch organization:
Patches 1-4:   initial skeleton for scanning and migration
Patch 5:       migration
Patches 6-8:   scanning optimizations
Patch 9:       target_node heuristic
Patches 10-12: sysfs, vmstat and tracing
Patch 13:      a basic prctl implementation

Raghavendra K T (13):
  mm: Add kmmscand kernel daemon
  mm: Maintain mm_struct list in the system
  mm: Scan the mm and create a migration list
  mm: Create a separate kernel thread for migration
  mm/migration: Migrate accessed folios to toptier node
  mm: Add throttling of mm scanning using scan_period
  mm: Add throttling of mm scanning using scan_size
  mm: Add initial scan delay
  mm: Add heuristic to calculate target node
  sysfs: Add sysfs support to tune scanning
  vmstat: Add vmstat counters
  trace/kmmscand: Add tracing of scanning and migration
  prctl: Introduce new prctl to control scanning

 Documentation/filesystems/proc.rst |    2 +
 fs/exec.c                          |    4 +
 fs/proc/task_mmu.c                 |    4 +
 include/linux/kmmscand.h           |   31 +
 include/linux/migrate.h            |    2 +
 include/linux/mm.h                 |   11 +
 include/linux/mm_types.h           |    7 +
 include/linux/vm_event_item.h      |   10 +
 include/trace/events/kmem.h        |   90 ++
 include/uapi/linux/prctl.h         |    7 +
 kernel/fork.c                      |    8 +
 kernel/sys.c                       |   25 +
 mm/Kconfig                         |    8 +
 mm/Makefile                        |    1 +
 mm/kmmscand.c                      | 1515 ++++++++++++++++++++++++++++
 mm/migrate.c                       |    2 +-
 mm/vmstat.c                        |   10 +
 17 files changed, 1736 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/kmmscand.h
 create mode 100644 mm/kmmscand.c


base-commit: b7f94fcf55469ad3ef8a74c35b488dbfa314d1bb

Comments

Davidlohr Bueso March 19, 2025, 11 p.m. UTC | #1
On Wed, 19 Mar 2025, Raghavendra K T wrote:

>[...]
>
>Result:
>========
>         base NUMAB2                    patched NUMAB1
>         time in sec  (%stdev)   time in sec  (%stdev)     %gain
> 8GB     134.33       ( 0.19 )        120.52  ( 0.21 )     10.28
>16GB     292.24       ( 0.60 )        275.97  ( 0.18 )      5.56
>32GB     585.06       ( 0.24 )        546.49  ( 0.35 )      6.59
>64GB    1278.98       ( 0.27 )       1205.20  ( 2.29 )      5.76
>
>Base case: 6.14-rc6    w/ numab mode = 1 (numa balancing is enabled).
>patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>         base NUMAB1                    patched NUMAB1
>         time in sec  (%stdev)   time in sec  (%stdev)     %gain
> 8GB     186.71       ( 0.99 )        120.52  ( 0.21 )     35.45
>16GB     376.09       ( 0.46 )        275.97  ( 0.18 )     26.62
>32GB     744.37       ( 0.71 )        546.49  ( 0.35 )     26.58
>64GB    1534.49       ( 0.09 )       1205.20  ( 2.29 )     21.45

Very promising, but a few things. A fairer comparison would be vs kpromoted
using the PROT_NONE of NUMAB2: essentially disregarding the asynchronous
migration, and effectively measuring synchronous vs asynchronous scanning
overhead and implied semantics. Essentially, save the extra kthread and only
have a per-NUMA-node migrator, which is the common denominator for all these
sources of hotness.

Similarly, while I don't see any users disabling NUMAB1 _and_ enabling
this sort of thing, it would be useful to have data on no numa balancing
at all. If nothing else, that would measure the effects of the dest
node heuristics.

Also, data/workload involving demotion would also be good to have for
a more complete picture.

>[...]
>
>Migration:
>
>6) Currently a fast scanner can bombard the migration list; the list needs
>to be maintained in a more organized way (e.g. using memslots), so that it
>also helps in maintaining recency/frequency information (similar to
>kpromoted, posted by Bharata).
>
>7) NUMAB2 throttling is very effective; we would need a common interface to
>control migration, and also to exploit batch migration.

Does NUMAB2 continue to exist? Are there any benefits in having two sources?

Thanks,
Davidlohr

>[...]
Raghavendra K T March 20, 2025, 8:51 a.m. UTC | #2
On 3/20/2025 4:30 AM, Davidlohr Bueso wrote:
> On Wed, 19 Mar 2025, Raghavendra K T wrote:
> 
>> [...]
> 
> Very promising, but a few things. A more fair comparison would be
> vs kpromoted using the PROT_NONE of NUMAB2. Essentially disregarding
> the asynchronous migration, and effectively measuring synchronous
> vs asynchronous scanning overhead and implied semantics. Essentially
> save the extra kthread and only have a per-NUMA node migrator, which
> is the common denominator for all these sources of hotness.


Yes, I agree that a fair comparison would be:
1) kmmscand generating data on pages to be promoted, working with
kpromoted migrating asynchronously,
VS
2) NUMAB2 generating data on pages to be migrated, integrated with
kpromoted.

As Bharata already mentioned, we tried integrating kpromoted with the
kmmscand-generated migration list, but kmmscand generates a huge amount of
scanned page data, which needs to be organized better so that kpromoted can
handle the migration effectively.

We have not tried (2) yet; I will get back on the possibility (and also
with numbers when both are ready).

> 
> Similarly, while I don't see any users disabling NUMAB1 _and_ enabling
> this sort of thing, it would be useful to have data on no numa balancing
> at all. If nothing else, that would measure the effects of the dest
> node heuristics.

Last time I checked with the patch, the numbers with NUMAB=0 and NUMAB=1
were not making much difference in the 8GB case, because most of the
migration was handled by kmmscand: before NUMAB=1 learns and tries to
migrate, kmmscand would have already migrated.

But a longer-running / more memory-intensive workload may show more
difference. I will come back with those numbers.

> 
> Also, data/workload involving demotion would also be good to have for
> a more complete picture.
>

Agreed. Additionally, we need to handle various cases, such as:
 - Should we choose the second-best target node when the first node is full?
>> [...]
> 
> Does NUMAB2 continue to exist? Are there any benefits in having two 
> sources?
> 

I think there is surely a benefit in having two sources.
NUMAB2 is more accurate, but learns slowly.

IBS: no scan overhead, but we need more sample data.

PTE A bit: more scanning overhead (though not significant enough to impact
performance when compared with NUMAB1/NUMAB2; rather, it performed better
because of the proactive migration), but less accurate data on hotness and
target node(?).

When the system is more stable, IBS was more effective. The PTE A bit and
NUMAB were effective when we needed more aggressive migration (in that
order).

- Raghu
Raghavendra K T March 20, 2025, 7:11 p.m. UTC | #3
On 3/20/2025 2:21 PM, Raghavendra K T wrote:
> On 3/20/2025 4:30 AM, Davidlohr Bueso wrote:
>> On Wed, 19 Mar 2025, Raghavendra K T wrote:
>>
>>> [...]
>>
>> Similarly, while I don't see any users disabling NUMAB1 _and_ enabling
>> this sort of thing, it would be useful to have data on no numa balancing
>> at all. If nothing else, that would measure the effects of the dest
>> node heuristics.
> 
> Last time I checked with the patch, the numbers with NUMAB=0 and NUMAB=1
> were not making much difference in the 8GB case, because most of the
> migration was handled by kmmscand: before NUMAB=1 learns and tries to
> migrate, kmmscand would have already migrated.
> 
> But a longer-running / more memory-intensive workload may show more
> difference. I will come back with those numbers.

                 base NUMAB=2           patched NUMAB=0
                 time in sec (%stdev)   time in sec (%stdev)
==========================================================
 8G:              134.33 ( 0.19)         119.88 ( 0.25)
16G:              292.24 ( 0.60)         325.06 (11.11)
32G:              585.06 ( 0.24)         546.15 ( 0.50)
64G:             1278.98 ( 0.27)        1221.41 ( 1.54)

We can see that the numbers have not changed much between NUMAB=1 and
NUMAB=0 in the patched case.

PS: for 16G there was a bad case where rare contention happened on the lock
for the same mm, which we can see from the stdev; this should be taken care
of in the next version.

[...]
Davidlohr Bueso March 20, 2025, 9:50 p.m. UTC | #4
On Thu, 20 Mar 2025, Raghavendra K T wrote:

>>Does NUMAB2 continue to exist? Are there any benefits in having two
>>sources?
>>
>
>I think there is surely a benefit in having two sources.

I think I was a bit vague. What I'm really asking is: if the scanning is
done async (kmmscand), should NUMAB2 also exist as a source and also feed
into the migrator? Looking at it differently, I guess doing so would allow
additional flexibility in choosing what to use.

>NUMAB2 is more accurate, but learns slowly.

Yes, which is also why it is important to have demotion in the picture, to
measure the ping-pong effect. LRU-based heuristics work best here.

>IBS: no scan overhead, but we need more sample data.
>
>PTE A bit: more scanning overhead (though not significant enough to impact
>performance when compared with NUMAB1/NUMAB2; rather, it performed better
>because of the proactive migration), but less accurate data on hotness and
>target node(?).
>
>When the system is more stable, IBS was more effective.

IBS will never be as effective as it should be, simply because of the lack
of time decay/frequency information (hence all the related phi hackery in
the kpromoted series). It has a global view of memory and should beat any
sw scanning heuristics by far, but the numbers have lacked.

As you know, PeterZ, Dave Hansen, Ying and I have expressed concerns about
this in the past. But that is not to say it does not serve as a source,
as you point out.

Thanks,
Davidlohr