Message ID | 20230310182851.2579138-1-shr@devkernel.io (mailing list archive)
---|---
Series | mm: process/cgroup ksm support
On 10.03.23 19:28, Stefan Roesch wrote:
> So far KSM can only be enabled by calling madvise for memory regions. To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.
>
> Use case 1:
> The madvise call is not available in the programming language. An example
> of this are programs with forked workloads using a garbage collected
> language without pointers. In such a language madvise cannot be made
> available.
>
> In addition the addresses of objects get moved around as they are garbage
> collected. KSM sharing needs to be enabled "from the outside" for these
> types of workloads.
>
> Use case 2:
> The same interpreter can also be used for workloads where KSM brings no
> benefit or even has overhead. We'd like to be able to enable KSM on a
> workload by workload basis.
>
> Use case 3:
> With the madvise call sharing opportunities are only enabled for the
> current process: it is a workload-local decision. A considerable number of
> sharing opportunities may exist across multiple workloads or jobs. Only a
> higher-level entity like a job scheduler or container can know for certain
> if it's running one or more instances of a job. That job scheduler however
> doesn't have the necessary internal workload knowledge to make targeted
> madvise calls.
>
> Security concerns:
> In previous discussions security concerns have been brought up. The
> problem is that an individual workload does not have the knowledge about
> what else is running on a machine. Therefore it has to be very
> conservative in what memory areas can be shared or not. However, if the
> system is dedicated to running multiple jobs within the same security
> domain, it's the job scheduler that has the knowledge that sharing can be
> safely enabled and is even desirable.
>
> Performance:
> Experiments with using UKSM have shown a capacity increase of around 20%.

Stefan, can you do me a favor and investigate which pages we end up
deduplicating -- especially if it's mostly only the zeropage and if it's
still that significant when disabling THP?

I'm currently investigating, with some engineers, enabling KSM on some
selected processes (enabling it blindly on all VMAs of those processes via
madvise()).

One thing we noticed is that such 20MiB processes (~50 of them) end up
saving ~2MiB of memory per process. That made me suspicious, because it's
the THP size.

What I think happens is that we have a 2 MiB area (stack?) and only touch a
single page. We get a whole 2 MiB THP populated. Most of that THP is
zeroes.

KSM somehow ends up splitting that THP and deduplicates all resulting
zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
"waste" 2 MiB. I think the processes with KSM have fewer (actually, no)
THPs than the processes with THP enabled, but I have only looked at a
sample of the processes' smaps so far.

I recall that there was a proposal to split underutilized THPs and free up
the zeropages (IIRC Rik was involved).

I also recall that Mike reported memory waste due to THP.
On 03/15/23 21:03, David Hildenbrand wrote:
> On 10.03.23 19:28, Stefan Roesch wrote:
[...]
> What I think happens is that we have a 2 MiB area (stack?) and only touch
> a single page. We get a whole 2 MiB THP populated. Most of that THP is
> zeroes.
>
> KSM somehow ends up splitting that THP and deduplicates all resulting
> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
> "waste" 2 MiB.
[...]
> I also recall that Mike reported memory waste due to THP.

Interesting! 2MB stacks were also involved in our case. That stack would
first get a write fault allocating a THP. The write fault would be followed
by a mprotect(PROT_NONE) of the 4K page at the bottom of the stack to
create a guard page. The mprotect would result in the THP being split,
resulting in 510 zero filled pages. I suppose KSM could dedup those zero
pages.
On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
> On 10.03.23 19:28, Stefan Roesch wrote:
> > So far KSM can only be enabled by calling madvise for memory regions. To
> > be able to use KSM for more workloads, KSM needs to have the ability to
> > be enabled / disabled at the process / cgroup level.
[...]
> Stefan, can you do me a favor and investigate which pages we end up
> deduplicating -- especially if it's mostly only the zeropage and if it's
> still that significant when disabling THP?
>
> I'm currently investigating with some engineers on playing with enabling
> KSM on some selected processes (enabling it blindly on all VMAs of that
> process via madvise()).
>
> One thing we noticed is that such (~50 times) 20MiB processes end up
> saving ~2MiB of memory per process. That made me suspicious, because it's
> the THP size.
>
> What I think happens is that we have a 2 MiB area (stack?) and only touch
> a single page. We get a whole 2 MiB THP populated. Most of that THP is
> zeroes.
>
> KSM somehow ends up splitting that THP and deduplicates all resulting
> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
> "waste" 2 MiB. I think the processes with KSM have less (none) THP than
> the processes with THP enabled, but I only took a look at a sample of the
> process' smaps so far.

THP and KSM is indeed an interesting problem. Better TLB hits with THPs,
but reduced chance of deduplicating memory - which may or may not result in
more IO that outweighs any THP benefits.

That said, the service in the experiment referenced above has swap turned
on and is under significant memory pressure. Unused split pages would get
swapped out. The difference from KSM was from deduplicating pages that were
in active use, not internal THP fragmentation.
On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
> > On 10.03.23 19:28, Stefan Roesch wrote:
[...]
> That said, the service in the experiment referenced above has swap turned
> on and is under significant memory pressure. Unused split pages would get
> swapped out. The difference from KSM was from deduplicating pages that
> were in active use, not internal THP fragmentation.

Brainfart, my apologies. It could have been the ksm-induced splits
themselves that allowed the unused subpages to get swapped out in the first
place.

But no, I double checked that workload just now. On a weekly average, it
has about 50 anon THPs and 12 million regular anon pages. THP is not a
factor in the reduction results.
On 15.03.23 22:19, Johannes Weiner wrote:
> On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
>> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
>>> On 10.03.23 19:28, Stefan Roesch wrote:
[...]
> Brainfart, my apologies. It could have been the ksm-induced splits
> themselves that allowed the unused subpages to get swapped out in the
> first place.

Yes, it's not easy to spot that this is implemented. I just wrote a simple
reproducer to confirm: modifying a single subpage in a bunch of THP ranges
will populate a THP whereby most of the THP is zeroes.

As long as you keep accessing the single subpage via the PMD, I assume the
chances of getting it swapped out are lower, because the folio will be
referenced/dirty.

KSM will come around, split the THP filled mostly with zeroes, and
deduplicate the resulting zero pages.

[that's where a zeropage-only KSM could be very valuable eventually, I
think]

> But no, I double checked that workload just now. On a weekly average, it
> has about 50 anon THPs and 12 million regular anon. THP is not a factor
> in the reduction results.

You mean with KSM enabled or with KSM disabled for the process? Not sure if
your observation reliably implies that the scenario described couldn't have
happened, but it's late in Germany already :)

In any case, it would be nice to get a feeling for how much variety there
is in these 20% of deduplicated pages. For example, whether it's 99% the
same page or just a wild collection.

Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already.
But I seem to be getting "126" in my simple example where only zeropages
should get deduplicated, so I have to take another look at the stats
tomorrow ...
On 15.03.23 22:45, David Hildenbrand wrote:
> On 15.03.23 22:19, Johannes Weiner wrote:
>> On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
>>> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
>>>> On 10.03.23 19:28, Stefan Roesch wrote:
[...]
> Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already.
> But I seem to be getting "126" in my simple example where only zeropages
> should get deduplicated, so I have to take another look at the stats
> tomorrow ...

On second thought, I guess it's because of "max_page_sharing". So one has
to set that really high to make pages_shared more expressive.
On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:

> So far KSM can only be enabled by calling madvise for memory regions. To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.

Review on this series has been a bit thin. Are we OK with moving this into
mm-stable for the next merge window?
On 29.03.23 01:09, Andrew Morton wrote:
> On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
>
>> So far KSM can only be enabled by calling madvise for memory regions. To
>> be able to use KSM for more workloads, KSM needs to have the ability to
>> be enabled / disabled at the process / cgroup level.
>
> Review on this series has been a bit thin. Are we OK with moving this
> into mm-stable for the next merge window?

I still want to review (traveling this week), but I also don't want to
block this forever.

I think I didn't get a reply from Stefan to my question [1] yet (only some
comments from Johannes). I would still be interested in the variance of
pages we end up de-duplicating for processes.

The 20% statement in the cover letter is rather useless and possibly
misleading if no details about the actual workload are shared.

Maybe Hugh has some comments as well (most probably he's also busy).

[1] https://lore.kernel.org/all/273a2f82-928f-5ad1-0988-1a886d169e83@redhat.com/
On Thu, Mar 30, 2023 at 06:55:31AM +0200, David Hildenbrand wrote:
> On 29.03.23 01:09, Andrew Morton wrote:
> > On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
[...]
> I think I didn't get a reply from Stefan to my question [1] yet (only
> some comments from Johannes). I would still be interested in the variance
> of pages we end up de-duplicating for processes.
>
> The 20% statement in the cover letter is rather useless and possibly
> misleading if no details about the actual workload are shared.

The workload is Instagram. It forks off Django runtimes on demand until it
saturates whatever hardware it's running on. This benefits from merging
common heap/stack state between instances. Since that runtime is quite
large, the 20% number is not surprising, and matches our expectations of
duplicative memory between instances.

Obviously we could spend months analysing which exact allocations are
identical, and then more months or years reworking the architecture to
deduplicate them by hand and in userspace. But this isn't practical, and
KSM is specifically for cases where this isn't practical.

Based on your request in the previous thread, we investigated whether the
boost was coming from the unintended side effects of KSM splitting THPs.
This wasn't the case.

If you have other theories on how the results could be bogus, we'd be happy
to investigate those as well. But you have to let us know what you're
looking for.

Beyond that, I don't think we need to prove from scratch that KSM can be a
worthwhile optimization. It's been established that it can be. This series
is about enabling it in scenarios where madvise() isn't practical, that's
it, and it's yielding the expected results.
On 30.03.23 16:26, Johannes Weiner wrote:
> On Thu, Mar 30, 2023 at 06:55:31AM +0200, David Hildenbrand wrote:
>> On 29.03.23 01:09, Andrew Morton wrote:
[...]
> The workload is Instagram. It forks off Django runtimes on demand until
> it saturates whatever hardware it's running on. This benefits from
> merging common heap/stack state between instances. Since that runtime is
> quite large, the 20% number is not surprising, and matches our
> expectations of duplicative memory between instances.

Thanks for this explanation. It's valuable to get at least a feeling for
the workload, because the result doesn't seem to apply to other workloads
at all.

> Obviously we could spend months analysing which exact allocations are
> identical, and then more months or years reworking the architecture to
> deduplicate them by hand and in userspace. But this isn't practical, and
> KSM is specifically for cases where this isn't practical.
>
> Based on your request in the previous thread, we investigated whether the
> boost was coming from the unintended side effects of KSM splitting THPs.
> This wasn't the case.
>
> If you have other theories on how the results could be bogus, we'd be
> happy to investigate those as well. But you have to let us know what
> you're looking for.

Maybe I'm bad at making such requests, but

"Stefan, can you do me a favor and investigate which pages we end up
deduplicating -- especially if it's mostly only the zeropage and if it's
still that significant when disabling THP?"

"In any case, it would be nice to get a feeling for how much variety there
is in these 20% of deduplicated pages."

is pretty clear to me. And it shouldn't take months.

> Beyond that, I don't think we need to prove from scratch that KSM can

I never expected a proof. I was merely trying to understand if it's really
KSM that helps here. Also with the intention of figuring out if KSM is
really the right tool to use here, or if it simply "helps by luck" as with
the shared zeropage. That end result could have been valuable for your use
case as well, because KSM overhead is real.

> be a worthwhile optimization. It's been established that it can be. This
> series is about enabling it in scenarios where madvise() isn't practical,
> that's it, and it's yielding the expected results.

I'm sorry to say, but you sound a bit aggressive and annoyed. I also have
no idea why Stefan isn't replying to me but always you. Am I asking the
wrong questions? Do you want me to stop looking at KSM code?
David Hildenbrand <david@redhat.com> writes: > On 15.03.23 22:19, Johannes Weiner wrote: >> On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote: >>> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote: >>>> On 10.03.23 19:28, Stefan Roesch wrote: >>>>> So far KSM can only be enabled by calling madvise for memory regions. To >>>>> be able to use KSM for more workloads, KSM needs to have the ability to be >>>>> enabled / disabled at the process / cgroup level. >>>>> >>>>> Use case 1: >>>>> The madvise call is not available in the programming language. An example for >>>>> this are programs with forked workloads using a garbage collected language without >>>>> pointers. In such a language madvise cannot be made available. >>>>> >>>>> In addition the addresses of objects get moved around as they are garbage >>>>> collected. KSM sharing needs to be enabled "from the outside" for these type of >>>>> workloads. >>>>> >>>>> Use case 2: >>>>> The same interpreter can also be used for workloads where KSM brings no >>>>> benefit or even has overhead. We'd like to be able to enable KSM on a workload >>>>> by workload basis. >>>>> >>>>> Use case 3: >>>>> With the madvise call sharing opportunities are only enabled for the current >>>>> process: it is a workload-local decision. A considerable number of sharing >>>>> opportuniites may exist across multiple workloads or jobs. Only a higler level >>>>> entity like a job scheduler or container can know for certain if its running >>>>> one or more instances of a job. That job scheduler however doesn't have >>>>> the necessary internal worklaod knowledge to make targeted madvise calls. >>>>> >>>>> Security concerns: >>>>> In previous discussions security concerns have been brought up. The problem is >>>>> that an individual workload does not have the knowledge about what else is >>>>> running on a machine. Therefore it has to be very conservative in what memory >>>>> areas can be shared or not. 
However, if the system is dedicated to running >>>>> multiple jobs within the same security domain, its the job scheduler that has >>>>> the knowledge that sharing can be safely enabled and is even desirable. >>>>> >>>>> Performance: >>>>> Experiments with using UKSM have shown a capacity increase of around 20%. >>>> >>>> Stefan, can you do me a favor and investigate which pages we end up >>>> deduplicating -- especially if it's mostly only the zeropage and if it's >>>> still that significant when disabling THP? >>>> >>>> >>>> I'm currently investigating with some engineers on playing with enabling KSM >>>> on some selected processes (enabling it blindly on all VMAs of that process >>>> via madvise() ). >>>> >>>> One thing we noticed is that such (~50 times) 20MiB processes end up saving >>>> ~2MiB of memory per process. That made me suspicious, because it's the THP >>>> size. >>>> >>>> What I think happens is that we have a 2 MiB area (stack?) and only touch a >>>> single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes. >>>> >>>> KSM somehow ends up splitting that THP and deduplicates all resulting >>>> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer >>>> "waste" 2 MiB. I think the processes with KSM have less (none) THP than the >>>> processes with THP enabled, but I only took a look at a sample of the >>>> process' smaps so far. >>> >>> THP and KSM is indeed an interesting problem. Better TLB hits with >>> THPs, but reduced chance of deduplicating memory - which may or may >>> not result in more IO that outweighs any THP benefits. >>> >>> That said, the service in the experiment referenced above has swap >>> turned on and is under significant memory pressure. Unused splitpages >>> would get swapped out. The difference from KSM was from deduplicating >>> pages that were in active use, not internal THP fragmentation. >> Brainfart, my apologies. 
It could have been the ksm-induced splits >> themselves that allowed the unused subpages to get swapped out in the >> first place. > > Yes, it's not easy to spot that this is implemented. I just wrote a simple > reproducer to confirm: modifying a single subpage in a bunch of THP ranges will > populate a THP whereby most of the THP is zeroes. > > As long as you keep accessing the single subpage via the PMD I assume chances of > getting it swapped out are lower, because the folio will be references/dirty. > > KSM will come around and split the THP filled mostly with zeroes and deduplciate > the resulting zero pages. > > [that's where a zeropage-only KSM could be very valuable eventually I think] > We can certainly run an experiment where THP is turned off to verify if we observe similar savings. >> But no, I double checked that workload just now. On a weekly average, >> it has about 50 anon THPs and 12 million regular anon. THP is not a >> factor in the reduction results. > > You mean with KSM enabled or with KSM disabled for the process? Not sure if your > observation reliably implies that the scenario described couldn't have happened, > but it's late in Germany already :) > > In any case, it would be nice to get a feeling for how much variety in these 20% > of deduplicated pages are. For example, if it's 99% the same page or just a wild > collection. > > Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already. But I > seem to be getting "126" in my simple example where only zeropages should get > deduplicated, so I have to take another look at the stats tomorrow ... /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an Instagram workload. The workload consists of 36 processes plus a few sidecar processes. Also, to give some idea for individual VMAs: 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM: 73160 KB)
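For readers following along, the reproducer David describes can be sketched in a few lines. This is an assumption about the experiment, not his actual code; THP_SIZE and NR_RANGES are illustrative, both madvise calls are best-effort hints, and 2 MiB alignment of the mapping is left to the kernel.

```python
import mmap

# Sketch of the described reproducer: dirty a single subpage in each
# 2 MiB range so that (with THP enabled) every range may be backed by a
# huge page that is mostly zeroes, then mark the region MADV_MERGEABLE
# so ksmd can split those THPs and merge the zero subpages.
THP_SIZE = 2 * 1024 * 1024
NR_RANGES = 16

def build_region(nr_ranges=NR_RANGES):
    mm = mmap.mmap(-1, nr_ranges * THP_SIZE)   # anonymous mapping
    try:
        mm.madvise(mmap.MADV_HUGEPAGE)         # ask for THP backing (hint only)
    except (AttributeError, OSError):
        pass                                   # kernel/libc may not support it
    for i in range(nr_ranges):
        mm[i * THP_SIZE] = 1                   # touch one subpage per range
    try:
        mm.madvise(mmap.MADV_MERGEABLE)        # register the region with ksmd
    except (AttributeError, OSError):
        pass                                   # CONFIG_KSM may be off
    return mm

region = build_region()
print(sum(region[i * THP_SIZE] for i in range(NR_RANGES)))  # 16 dirtied subpages
```

Whether ksmd actually splits and merges anything then depends on /sys/kernel/mm/ksm/run and pages_to_scan; the sketch only sets the region up.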
My mistake, I first replied to an older email. David Hildenbrand <david@redhat.com> writes: > On 30.03.23 16:26, Johannes Weiner wrote: >> On Thu, Mar 30, 2023 at 06:55:31AM +0200, David Hildenbrand wrote: >>> On 29.03.23 01:09, Andrew Morton wrote: >>>> On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote: >>>> >>>>> So far KSM can only be enabled by calling madvise for memory regions. To >>>>> be able to use KSM for more workloads, KSM needs to have the ability to be >>>>> enabled / disabled at the process / cgroup level. >>>> >>>> Review on this series has been a bit thin. Are we OK with moving this >>>> into mm-stable for the next merge window? >>> >>> I still want to review (traveling this week), but I also don't want to block >>> this forever. >>> >>> I think I didn't get a reply from Stefan to my question [1] yet (only some >>> comments from Johannes). I would still be interested in the variance of >>> pages we end up de-duplicating for processes. >>> >>> The 20% statement in the cover letter is rather useless and possibly >>> misleading if no details about the actual workload are shared. >> The workload is instagram. It forks off Django runtimes on-demand >> until it saturates whatever hardware it's running on. This benefits >> from merging common heap/stack state between instances. Since that >> runtime is quite large, the 20% number is not surprising, and matches >> our expectations of duplicative memory between instances. > > Thanks for this explanation. It's valuable to get at least a feeling for the > workload because it doesn't seem to apply to other workloads at all. > >> Obviously we could spend months analysing which exact allocations are >> identical, and then more months or years reworking the architecture to >> deduplicate them by hand and in userspace. But this isn't practical, >> and KSM is specifically for cases where this isn't practical.
>> Based on your request in the previous thread, we investigated whether >> the boost was coming from the unintended side effects of KSM splitting >> THPs. This wasn't the case. >> If you have other theories on how the results could be bogus, we'd be >> happy to investigate those as well. But you have to let us know what >> you're looking for. >> > > Maybe I'm bad at making such requests but > > "Stefan, can you do me a favor and investigate which pages we end up > deduplicating -- especially if it's mostly only the zeropage and if it's > still that significant when disabling THP?" > > "In any case, it would be nice to get a feeling for how much variety in > these 20% of deduplicated pages are. " > > is pretty clear to me. And shouldn't take months. > /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an Instagram workload. The workload consists of 36 processes plus a few sidecar processes. Each of these individual processes has around 500MB in KSM pages. Also, to give some idea for individual VMAs: 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM: 73160 KB) >> Beyond that, I don't think we need to prove from scratch that KSM can > > I never expected a proof. I was merely trying to understand if it's really KSM > that helps here. Also with the intention to figure out if KSM is really the > right tool to use here or if it simply "helps by luck" as with the shared > zeropage. That end result could have been valuable to your use case as well, > because KSM overhead is real. > >> be a worthwhile optimization. It's been established that it can >> be. This series is about enabling it in scenarios where madvise() >> isn't practical, that's it, and it's yielding the expected results. > > I'm sorry to say, but you sound a bit aggressive and annoyed. I also have no > idea why Stefan isn't replying to me but always you. > > Am I asking the wrong questions? Do you want me to stop looking at KSM code?
Your review is valuable; Johannes was quicker than me.
On Thu, 30 Mar 2023 06:55:31 +0200 David Hildenbrand <david@redhat.com> wrote: > On 29.03.23 01:09, Andrew Morton wrote: > > On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote: > > > >> So far KSM can only be enabled by calling madvise for memory regions. To > >> be able to use KSM for more workloads, KSM needs to have the ability to be > >> enabled / disabled at the process / cgroup level. > > > > Review on this series has been a bit thin. Are we OK with moving this > > into mm-stable for the next merge window? > > I still want to review (traveling this week), but I also don't want to > block this forever. No hurry, we're only at -rc4. Holding this series in mm-unstable for another 2-3 weeks isn't a problem.
>>> Obviously we could spend months analysing which exact allocations are >>> identical, and then more months or years reworking the architecture to >>> deduplicate them by hand and in userspace. But this isn't practical, >>> and KSM is specifically for cases where this isn't practical. >>> Based on your request in the previous thread, we investigated whether >>> the boost was coming from the unintended side effects of KSM splitting >>> THPs. This wasn't the case. >>> If you have other theories on how the results could be bogus, we'd be >>> happy to investigate those as well. But you have to let us know what >>> you're looking for. >>> >> >> Maybe I'm bad at making such requests but >> >> "Stefan, can you do me a favor and investigate which pages we end up >> deduplicating -- especially if it's mostly only the zeropage and if it's >> still that significant when disabling THP?" >> >> "In any case, it would be nice to get a feeling for how much variety in >> these 20% of deduplicated pages are. " >> >> is pretty clear to me. And shouldn't take months. >> Just to clarify: the details I requested are not meant to decide whether to reject the patch set (I understand that it can be beneficial to have); I primarily want to understand if we're really dealing with a workload where KSM is able to deduplicate pages that are non-trivial, to maybe figure out if there are other workloads that could similarly benefit -- or if we could optimize KSM for these specific cases or avoid the memory deduplication altogether. In contrast to e.g.: 1) THP resulted in many zeropages we end up deduplicating again. The THP placement was unfortunate. 2) Unoptimized memory allocators that leave many identical pages mapped after freeing up memory (e.g., zeroed pages, pages all filled with poison values) instead of e.g., using MADV_DONTNEED to free up that memory. > > /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an > Instagram workload. 
The workload consists of 36 processes plus a few > sidecar processes. Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that environment? What would be interesting is pages_shared after max_page_sharing was set to a very high number such that pages_shared does not include duplicates. Then pages_shared actually expresses how many different pages we deduplicate. No need to run without THP in that case. Similarly, enabling "use_zero_pages" could highlight if your workload ends up deduplicating a lot of zeropages. But maxing out max_page_sharing would be sufficient to understand what's happening. > > Each of these individual processes has around 500MB in KSM pages. > That's really a lot, thanks. > Also to give some idea for individual VMA's > > 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM: > 73160 KB) > I'll have a look at the patches today.
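As an aside, the counters being discussed here can be pulled together with a short sketch (hypothetical helper, not part of the series; /sys/kernel/mm/ksm exists only with CONFIG_KSM):

```python
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")

def ksm_stats():
    """Read the integer KSM counters from sysfs (empty dict without CONFIG_KSM)."""
    stats = {}
    if KSM.is_dir():
        for f in KSM.iterdir():
            try:
                stats[f.name] = int(f.read_text())
            except (ValueError, OSError):
                pass  # skip non-integer or unreadable entries
    return stats

def dedup_factor(stats):
    # pages_sharing / pages_shared: how many duplicates each stable page
    # stands in for. Once max_page_sharing is set high enough that there
    # are no duplicate stable nodes, pages_shared approximates the number
    # of *distinct* deduplicated pages.
    shared = stats.get("pages_shared", 0)
    return stats.get("pages_sharing", 0) / shared if shared else 0.0
```

With the numbers Stefan posts later in the thread (pages_shared 125446, pages_sharing 5259506), dedup_factor comes out at roughly 42 copies per stable page.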
David Hildenbrand <david@redhat.com> writes: >>>> Obviously we could spend months analysing which exact allocations are >>>> identical, and then more months or years reworking the architecture to >>>> deduplicate them by hand and in userspace. But this isn't practical, >>>> and KSM is specifically for cases where this isn't practical. >>>> Based on your request in the previous thread, we investigated whether >>>> the boost was coming from the unintended side effects of KSM splitting >>>> THPs. This wasn't the case. >>>> If you have other theories on how the results could be bogus, we'd be >>>> happy to investigate those as well. But you have to let us know what >>>> you're looking for. >>>> >>> >>> Maybe I'm bad at making such requests but >>> >>> "Stefan, can you do me a favor and investigate which pages we end up >>> deduplicating -- especially if it's mostly only the zeropage and if it's >>> still that significant when disabling THP?" >>> >>> "In any case, it would be nice to get a feeling for how much variety in >>> these 20% of deduplicated pages are. " >>> >>> is pretty clear to me. And shouldn't take months. >>> > > Just to clarify: the details I requested are not meant to decide whether to > reject the patch set (I understand that it can be beneficial to have); I > primarily want to understand if we're really dealing with a workload where KSM > is able to deduplicate pages that are non-trivial, to maybe figure out if there > are other workloads that could similarly benefit -- or if we could optimize KSM > for these specific cases or avoid the memory deduplication altogether. > > In contrast to e.g.: > > 1) THP resulted in many zeropages we end up deduplicating again. The THP > placement was unfortunate. > > 2) Unoptimized memory allocators that leave many identical pages mapped > after freeing up memory (e.g., zeroed pages, pages all filled with > poison values) instead of e.g., using MADV_DONTNEED to free up that > memory. 
> > I repeated an experiment with and without KSM. In terms of THP there is no huge difference between the two. On a 64GB main memory machine I see between 100 - 400MB in AnonHugePages. >> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an >> Instagram workload. The workload consists of 36 processes plus a few >> sidecar processes. > > Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that > environment? > It's set to the standard value of 256. In the meantime I have run experiments with different settings for pages_to_scan. With the default value of 100, we only get a relatively small benefit from KSM. If I increase the value to, for instance, 2000 or 3000, the savings are substantial. (The workload is memory-bound, not CPU-bound.) Here are some stats for setting pages_to_scan to 3000:

full_scans: 560
general_profit: 20620539008
max_page_sharing: 256
merge_across_nodes: 1
pages_shared: 125446
pages_sharing: 5259506
pages_to_scan: 3000
pages_unshared: 1897537
pages_volatile: 12389223
run: 1
sleep_millisecs: 20
stable_node_chains: 176
stable_node_chains_prune_millisecs: 2000
stable_node_dups: 2604
use_zero_pages: 0
zero_pages_sharing: 0

> What would be interesting is pages_shared after max_page_sharing was set to a > very high number such that pages_shared does not include duplicates. Then > pages_shared actually expresses how many different pages we deduplicate. No need > to run without THP in that case. > That's on my list for the next set of experiments. > Similarly, enabling "use_zero_pages" could highlight if your workload ends up > deduplciating a lot of zeropages. But maxing out max_page_sharing would be > sufficient to understand what's happening. > > I already ran experiments with use_zero_pages, but they didn't make a difference. I'll repeat the experiment with a higher pages_to_scan value. >> Each of these individual processes has around 500MB in KSM pages. >> > > That's really a lot, thanks.
> >> Also to give some idea for individual VMA's >> 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM: >> 73160 KB) >> > > I'll have a look at the patches today.
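Rough back-of-the-envelope arithmetic behind the pages_to_scan=3000 stats above (my own sketch, assuming 4 KiB base pages; the helper names are mine, and general_profit additionally subtracts ksmd's rmap_item overhead, so the first figure is an upper bound):

```python
PAGE_SIZE = 4096  # assuming 4 KiB base pages

def approx_saved_bytes(pages_sharing):
    # Every page counted in pages_sharing is backed by a shared stable
    # page instead of its own copy, so the upper bound on savings is
    # simply pages_sharing * PAGE_SIZE.
    return pages_sharing * PAGE_SIZE

def scan_rate_pages_per_sec(pages_to_scan, sleep_millisecs):
    # ksmd scans pages_to_scan pages, sleeps sleep_millisecs, repeats.
    return pages_to_scan * 1000 / sleep_millisecs

# Numbers from the pages_to_scan=3000 run quoted above:
print(approx_saved_bytes(5259506) / 2**30)   # ~20 GiB, in line with general_profit
print(scan_rate_pages_per_sec(3000, 20))     # 150000.0 pages/s
print(scan_rate_pages_per_sec(100, 20))      # 5000.0 pages/s at the default of 100
```

The 30x scan-rate difference is consistent with the observation that the default pages_to_scan of 100 yields only a small benefit on a workload this large.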
On 03.04.23 18:34, Stefan Roesch wrote: >> >> In contrast to e.g.: >> >> 1) THP resulted in many zeropages we end up deduplicating again. The THP >> placement was unfortunate. >> >> 2) Unoptimized memory allocators that leave many identical pages mapped >> after freeing up memory (e.g., zeroed pages, pages all filled with >> poison values) instead of e.g., using MADV_DONTNEED to free up that >> memory. >> >> > > I repeated an experiment with and without KSM. In terms of THP there is > no huge difference between the two. On a 64GB main memory machine I see > between 100 - 400MB in AnonHugePages. > >>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an >>> Instagram workload. The workload consists of 36 processes plus a few >>> sidecar processes. >> >> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that >> environment? >> > > It's set to the standard value of 256. > > In the meantime I have run experiments with different settings for > pages_to_scan. With the default value of 100, we only get a relatively > small benefit of KSM. If I increase the value to for instance to 2000 or > 3000 the savings are substantial. (The workload is memory bound, not > CPU bound). Interesting. > > Here are some stats for setting pages_to_scan to 3000: > > full_scans: 560 > general_profit: 20620539008 > max_page_sharing: 256 > merge_across_nodes: 1 > pages_shared: 125446 > pages_sharing: 5259506 > pages_to_scan: 3000 > pages_unshared: 1897537 > pages_volatile: 12389223 > run: 1 > sleep_millisecs: 20 > stable_node_chains: 176 > stable_node_chains_prune_millisecs: 2000 > stable_node_dups: 2604 > use_zero_pages: 0 > zero_pages_sharing: 0 > > >> What would be interesting is pages_shared after max_page_sharing was set to a >> very high number such that pages_shared does not include duplicates. Then >> pages_shared actually expresses how many different pages we deduplicate. No need >> to run without THP in that case. 
>> > > Thats on my list for the next set of experiments. Splendid. >> Similarly, enabling "use_zero_pages" could highlight if your workload ends up >> deduplciating a lot of zeropages. But maxing out max_page_sharing would be >> sufficient to understand what's happening. >> >> > > I already run experiments with use_zero_pages, but they didn't make a > difference. I'll repeat the experiment with a higher pages_to_scan > value. Okay, so it's most certainly not the zeropage. Thanks for that information and running the experiments!
Stefan Roesch <shr@devkernel.io> writes: > David Hildenbrand <david@redhat.com> writes: > >>>>> Obviously we could spend months analysing which exact allocations are >>>>> identical, and then more months or years reworking the architecture to >>>>> deduplicate them by hand and in userspace. But this isn't practical, >>>>> and KSM is specifically for cases where this isn't practical. >>>>> Based on your request in the previous thread, we investigated whether >>>>> the boost was coming from the unintended side effects of KSM splitting >>>>> THPs. This wasn't the case. >>>>> If you have other theories on how the results could be bogus, we'd be >>>>> happy to investigate those as well. But you have to let us know what >>>>> you're looking for. >>>>> >>>> >>>> Maybe I'm bad at making such requests but >>>> >>>> "Stefan, can you do me a favor and investigate which pages we end up >>>> deduplicating -- especially if it's mostly only the zeropage and if it's >>>> still that significant when disabling THP?" >>>> >>>> "In any case, it would be nice to get a feeling for how much variety in >>>> these 20% of deduplicated pages are. " >>>> >>>> is pretty clear to me. And shouldn't take months. >>>> >> >> Just to clarify: the details I requested are not meant to decide whether to >> reject the patch set (I understand that it can be beneficial to have); I >> primarily want to understand if we're really dealing with a workload where KSM >> is able to deduplicate pages that are non-trivial, to maybe figure out if there >> are other workloads that could similarly benefit -- or if we could optimize KSM >> for these specific cases or avoid the memory deduplication altogether. >> >> In contrast to e.g.: >> >> 1) THP resulted in many zeropages we end up deduplicating again. The THP >> placement was unfortunate. 
>> >> 2) Unoptimized memory allocators that leave many identical pages mapped >> after freeing up memory (e.g., zeroed pages, pages all filled with >> poison values) instead of e.g., using MADV_DONTNEED to free up that >> memory. >> >> > > I repeated an experiment with and without KSM. In terms of THP there is > no huge difference between the two. On a 64GB main memory machine I see > between 100 - 400MB in AnonHugePages. > >>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an >>> Instagram workload. The workload consists of 36 processes plus a few >>> sidecar processes. >> >> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that >> environment? >> > > It's set to the standard value of 256. > > In the meantime I have run experiments with different settings for > pages_to_scan. With the default value of 100, we only get a relatively > small benefit of KSM. If I increase the value to for instance to 2000 or > 3000 the savings are substantial. (The workload is memory bound, not > CPU bound). > > Here are some stats for setting pages_to_scan to 3000: > > full_scans: 560 > general_profit: 20620539008 > max_page_sharing: 256 > merge_across_nodes: 1 > pages_shared: 125446 > pages_sharing: 5259506 > pages_to_scan: 3000 > pages_unshared: 1897537 > pages_volatile: 12389223 > run: 1 > sleep_millisecs: 20 > stable_node_chains: 176 > stable_node_chains_prune_millisecs: 2000 > stable_node_dups: 2604 > use_zero_pages: 0 > zero_pages_sharing: 0 > > >> What would be interesting is pages_shared after max_page_sharing was set to a >> very high number such that pages_shared does not include duplicates. Then >> pages_shared actually expresses how many different pages we deduplicate. No need >> to run without THP in that case. >> > > Thats on my list for the next set of experiments. > In the new experiment I increased the max_page_sharing value to 16384. This reduced the number of stable_node_dups considerably (it's around 3% of the previous value). However, pages_sharing is still very high for this workload.

full_scans: 138
general_profit: 24442268608
max_page_sharing: 16384
merge_across_nodes: 1
pages_shared: 144590
pages_sharing: 6230983
pages_to_scan: 3000
pages_unshared: 2120307
pages_volatile: 14590780
run: 1
sleep_millisecs: 20
stable_node_chains: 23
stable_node_chains_prune_millisecs: 2000
stable_node_dups: 78
use_zero_pages: 0
zero_pages_sharing: 0

>> Similarly, enabling "use_zero_pages" could highlight if your workload ends up >> deduplciating a lot of zeropages. But maxing out max_page_sharing would be >> sufficient to understand what's happening. >> >> > > I already run experiments with use_zero_pages, but they didn't make a > difference. I'll repeat the experiment with a higher pages_to_scan > value. > >>> Each of these individual processes has around 500MB in KSM pages. >>> >> >> That's really a lot, thanks. >> >>> Also to give some idea for individual VMA's >>> 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM: >>> 73160 KB) >>> >> >> I'll have a look at the patches today.
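The effect Stefan describes can be checked directly from the two runs (numbers copied from this thread; a sanity check, not output of any tool used here):

```python
# stable_node_dups before and after raising max_page_sharing 256 -> 16384:
dups_256, dups_16384 = 2604, 78
print(round(100 * dups_16384 / dups_256, 1))   # ~3% of the previous value

# With duplicate stable nodes largely folded away, pages_shared now
# approximates distinct deduplicated pages, each standing in for ~43
# copies across the workload:
pages_shared, pages_sharing = 144590, 6230983
print(round(pages_sharing / pages_shared, 1))
```

So the deduplication is spread over roughly 145k distinct pages rather than a handful of dominant ones, which is the variety question raised earlier in the thread.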
On 06.04.23 18:59, Stefan Roesch wrote: > > Stefan Roesch <shr@devkernel.io> writes: > >> David Hildenbrand <david@redhat.com> writes: >> >>>>>> Obviously we could spend months analysing which exact allocations are >>>>>> identical, and then more months or years reworking the architecture to >>>>>> deduplicate them by hand and in userspace. But this isn't practical, >>>>>> and KSM is specifically for cases where this isn't practical. >>>>>> Based on your request in the previous thread, we investigated whether >>>>>> the boost was coming from the unintended side effects of KSM splitting >>>>>> THPs. This wasn't the case. >>>>>> If you have other theories on how the results could be bogus, we'd be >>>>>> happy to investigate those as well. But you have to let us know what >>>>>> you're looking for. >>>>>> >>>>> >>>>> Maybe I'm bad at making such requests but >>>>> >>>>> "Stefan, can you do me a favor and investigate which pages we end up >>>>> deduplicating -- especially if it's mostly only the zeropage and if it's >>>>> still that significant when disabling THP?" >>>>> >>>>> "In any case, it would be nice to get a feeling for how much variety in >>>>> these 20% of deduplicated pages are. " >>>>> >>>>> is pretty clear to me. And shouldn't take months. >>>>> >>> >>> Just to clarify: the details I requested are not meant to decide whether to >>> reject the patch set (I understand that it can be beneficial to have); I >>> primarily want to understand if we're really dealing with a workload where KSM >>> is able to deduplicate pages that are non-trivial, to maybe figure out if there >>> are other workloads that could similarly benefit -- or if we could optimize KSM >>> for these specific cases or avoid the memory deduplication altogether. >>> >>> In contrast to e.g.: >>> >>> 1) THP resulted in many zeropages we end up deduplicating again. The THP >>> placement was unfortunate. 
>>> >>> 2) Unoptimized memory allocators that leave many identical pages mapped >>> after freeing up memory (e.g., zeroed pages, pages all filled with >>> poison values) instead of e.g., using MADV_DONTNEED to free up that >>> memory. >>> >>> >> >> I repeated an experiment with and without KSM. In terms of THP there is >> no huge difference between the two. On a 64GB main memory machine I see >> between 100 - 400MB in AnonHugePages. >> >>>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an >>>> Instagram workload. The workload consists of 36 processes plus a few >>>> sidecar processes. >>> >>> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that >>> environment? >>> >> >> It's set to the standard value of 256. >> >> In the meantime I have run experiments with different settings for >> pages_to_scan. With the default value of 100, we only get a relatively >> small benefit of KSM. If I increase the value to for instance to 2000 or >> 3000 the savings are substantial. (The workload is memory bound, not >> CPU bound). >> >> Here are some stats for setting pages_to_scan to 3000: >> >> full_scans: 560 >> general_profit: 20620539008 >> max_page_sharing: 256 >> merge_across_nodes: 1 >> pages_shared: 125446 >> pages_sharing: 5259506 >> pages_to_scan: 3000 >> pages_unshared: 1897537 >> pages_volatile: 12389223 >> run: 1 >> sleep_millisecs: 20 >> stable_node_chains: 176 >> stable_node_chains_prune_millisecs: 2000 >> stable_node_dups: 2604 >> use_zero_pages: 0 >> zero_pages_sharing: 0 >> >> >>> What would be interesting is pages_shared after max_page_sharing was set to a >>> very high number such that pages_shared does not include duplicates. Then >>> pages_shared actually expresses how many different pages we deduplicate. No need >>> to run without THP in that case. >>> >> >> Thats on my list for the next set of experiments. >> > > In the new experiment I increased the max_page_sharing value to 16384. 
> This reduced the number of stable_node_dups considerably (its around 3% > of the previous value). However pages_sharing is still very high for > this workload. > > full_scans: 138 > general_profit: 24442268608 > max_page_sharing: 16384 > merge_across_nodes: 1 > pages_shared: 144590 > pages_sharing: 6230983 > pages_to_scan: 3000 > pages_unshared: 2120307 > pages_volatile: 14590780 > run: 1 > sleep_millisecs: 20 > stable_node_chains: 23 > stable_node_chains_prune_millisecs: 2000 > stable_node_dups: 78 > use_zero_pages: 0 > zero_pages_sharing: 0 Interesting, thanks! I wonder if it's really many interpreters producing (and caching?) essentially the same blobs (for example, for a JIT the IR and/or target executable code). So maybe in general, such multi-instance interpreters are a good candidate for KSM. (I recall there were some processes where a server would perform and cache the translations instead.) But that's just pure speculation :)