Message ID | 20230310182851.2579138-2-shr@devkernel.io (mailing list archive) |
---|---|
State | New |
Series | mm: process/cgroup ksm support |
On Fri, Mar 10, 2023 at 10:28:49AM -0800, Stefan Roesch wrote: > @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, > return (ret & VM_FAULT_OOM) ? -ENOMEM : 0; > } > > +static bool vma_ksm_compatible(struct vm_area_struct *vma) > +{ > + /* > + * Be somewhat over-protective for now! > + */ > + if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE | > + VM_PFNMAP | VM_IO | VM_DONTEXPAND | > + VM_HUGETLB | VM_MIXEDMAP)) > + return false; /* just ignore the advice */ > + > + if (vma_is_dax(vma)) > + return false; > + > +#ifdef VM_SAO > + if (*vm_flags & VM_SAO) > + return false; > +#endif > +#ifdef VM_SPARC_ADI > + if (*vm_flags & VM_SPARC_ADI) > + return false; > +#endif These two also need to check vma->vm_flags, or it won't build on those configs. Otherwise, the patch looks good to me.
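For reference, a sketch of the helper with that fix applied (derived from the hunk quoted above, so only the two arch-specific checks differ; not necessarily the final upstream version):

static bool vma_ksm_compatible(struct vm_area_struct *vma)
{
	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
			     VM_PFNMAP | VM_IO | VM_DONTEXPAND |
			     VM_HUGETLB | VM_MIXEDMAP))
		return false;

	if (vma_is_dax(vma))
		return false;

	/*
	 * Unlike ksm_madvise(), this helper has no *vm_flags parameter,
	 * so the arch-specific flags must be tested on vma->vm_flags.
	 */
#ifdef VM_SAO
	if (vma->vm_flags & VM_SAO)
		return false;
#endif
#ifdef VM_SPARC_ADI
	if (vma->vm_flags & VM_SPARC_ADI)
		return false;
#endif
	return true;
}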
On 10.03.23 19:28, Stefan Roesch wrote: > Patch series "mm: process/cgroup ksm support", v3. > > So far KSM can only be enabled by calling madvise for memory regions. To > be able to use KSM for more workloads, KSM needs to have the ability to be > enabled / disabled at the process / cgroup level. > > Use case 1: > > The madvise call is not available in the programming language. An > example for this are programs with forked workloads using a garbage > collected language without pointers. In such a language madvise cannot > be made available. > > In addition the addresses of objects get moved around as they are > garbage collected. KSM sharing needs to be enabled "from the outside" > for these type of workloads. I guess the interpreter could enable it (like a memory allocator could enable it for the whole heap). But I get that it's much easier to enable this per-process, and eventually only when a lot of the same processes are running in that particular environment. > > Use case 2: > > The same interpreter can also be used for workloads where KSM brings > no benefit or even has overhead. We'd like to be able to enable KSM on > a workload by workload basis. Agreed. A per-process control is also helpful to identidy workloads where KSM might be beneficial (and to which degree). > > Use case 3: > > With the madvise call sharing opportunities are only enabled for the > current process: it is a workload-local decision. A considerable number > of sharing opportuniites may exist across multiple workloads or jobs. > Only a higler level entity like a job scheduler or container can know > for certain if its running one or more instances of a job. That job > scheduler however doesn't have the necessary internal worklaod knowledge > to make targeted madvise calls. > > Security concerns: > > In previous discussions security concerns have been brought up. The > problem is that an individual workload does not have the knowledge about > what else is running on a machine. Therefore it has to be very > conservative in what memory areas can be shared or not. However, if the > system is dedicated to running multiple jobs within the same security > domain, its the job scheduler that has the knowledge that sharing can be > safely enabled and is even desirable. > > Performance: > > Experiments with using UKSM have shown a capacity increase of around > 20%. > As raised, it would be great to include more details about the workload where this particulalry helps (e.g., a lot of Django processes operating in the same domain). > > 1. New options for prctl system command > > This patch series adds two new options to the prctl system call. > The first one allows to enable KSM at the process level and the second > one to query the setting. > > The setting will be inherited by child processes. > > With the above setting, KSM can be enabled for the seed process of a > cgroup and all processes in the cgroup will inherit the setting. > > 2. Changes to KSM processing > > When KSM is enabled at the process level, the KSM code will iterate > over all the VMA's and enable KSM for the eligible VMA's. > > When forking a process that has KSM enabled, the setting will be > inherited by the new child process. > > In addition when KSM is disabled for a process, KSM will be disabled > for the VMA's where KSM has been enabled. Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new prctl is enabled for a process? > > 3. 
Add general_profit metric > > The general_profit metric of KSM is specified in the documentation, > but not calculated. This adds the general profit metric to > /sys/kernel/debug/mm/ksm. > > 4. Add more metrics to ksm_stat > > This adds the process profit and ksm type metric to > /proc/<pid>/ksm_stat. > > 5. Add more tests to ksm_tests > > This adds an option to specify the merge type to the ksm_tests. > This allows to test madvise and prctl KSM. It also adds a new option > to query if prctl KSM has been enabled. It adds a fork test to verify > that the KSM process setting is inherited by client processes. > > An update to the prctl(2) manpage has been proposed at [1]. > > This patch (of 3): > > This adds a new prctl to API to enable and disable KSM on a per process > basis instead of only at the VMA basis (with madvise). > > 1) Introduce new MMF_VM_MERGE_ANY flag > > This introduces the new flag MMF_VM_MERGE_ANY flag. When this flag > is set, kernel samepage merging (ksm) gets enabled for all vma's of a > process. > > 2) add flag to __ksm_enter > > This change adds the flag parameter to __ksm_enter. This allows to > distinguish if ksm was called by prctl or madvise. > > 3) add flag to __ksm_exit call > > This adds the flag parameter to the __ksm_exit() call. This allows > to distinguish if this call is for an prctl or madvise invocation. > > 4) invoke madvise for all vmas in scan_get_next_rmap_item > > If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate > over all the vmas and enable ksm if possible. For the vmas that can be > ksm enabled this is only done once. > > 5) support disabling of ksm for a process > > This adds the ability to disable ksm for a process if ksm has been > enabled for the process. > > 6) add new prctl option to get and set ksm for a process > > This adds two new options to the prctl system call > - enable ksm for all vmas of a process (if the vmas support it). > - query if ksm has been enabled for a process. Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way, to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE infrastructure. Especially: 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGABLE on all applicable compatible. Further, set MMF_VM_MERGEABLE and enter KSM if not already set. 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set VM_MERGABLE? The you can avoid all runtime checks for compatible VMAs and only look at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely expressive then for all VMAs. You don't need vma_ksm_mergeable() then. Another thing to consider is interaction with arch/s390/mm/gmap.c: s390x/kvm does not support KSM and it has to disable it for all VMAs. We have to find a way to fence the prctl (for example, fail setting the prctl after gmap_mark_unmergeable() ran, and make gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully in some other way). 
> > Link: https://lkml.kernel.org/r/20230227220206.436662-1-shr@devkernel.io [1] > Link: https://lkml.kernel.org/r/20230224044000.3084046-1-shr@devkernel.io > Link: https://lkml.kernel.org/r/20230224044000.3084046-2-shr@devkernel.io > Signed-off-by: Stefan Roesch <shr@devkernel.io> > Cc: David Hildenbrand <david@redhat.com> > Cc: Johannes Weiner <hannes@cmpxchg.org> > Cc: Michal Hocko <mhocko@suse.com> > Cc: Rik van Riel <riel@surriel.com> > Cc: Bagas Sanjaya <bagasdotme@gmail.com> > Signed-off-by: Andrew Morton <akpm@linux-foundation.org> > --- > include/linux/ksm.h | 14 ++++-- > include/linux/sched/coredump.h | 1 + > include/uapi/linux/prctl.h | 2 + > kernel/sys.c | 27 ++++++++++ > mm/ksm.c | 90 +++++++++++++++++++++++----------- > 5 files changed, 101 insertions(+), 33 deletions(-) > > diff --git a/include/linux/ksm.h b/include/linux/ksm.h > index 7e232ba59b86..d38a05a36298 100644 > --- a/include/linux/ksm.h > +++ b/include/linux/ksm.h > @@ -18,20 +18,24 @@ > #ifdef CONFIG_KSM > int ksm_madvise(struct vm_area_struct *vma, unsigned long start, > unsigned long end, int advice, unsigned long *vm_flags); > -int __ksm_enter(struct mm_struct *mm); > -void __ksm_exit(struct mm_struct *mm); > +int __ksm_enter(struct mm_struct *mm, int flag); > +void __ksm_exit(struct mm_struct *mm, int flag); > > static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm) > { > + if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags)) > + return __ksm_enter(mm, MMF_VM_MERGE_ANY); > if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags)) > - return __ksm_enter(mm); > + return __ksm_enter(mm, MMF_VM_MERGEABLE); > return 0; > } > > static inline void ksm_exit(struct mm_struct *mm) > { > - if (test_bit(MMF_VM_MERGEABLE, &mm->flags)) > - __ksm_exit(mm); > + if (test_bit(MMF_VM_MERGE_ANY, &mm->flags)) > + __ksm_exit(mm, MMF_VM_MERGE_ANY); > + else if (test_bit(MMF_VM_MERGEABLE, &mm->flags)) > + __ksm_exit(mm, MMF_VM_MERGEABLE); > } > > /* > diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h > index 0e17ae7fbfd3..0ee96ea7a0e9 100644 > --- a/include/linux/sched/coredump.h > +++ b/include/linux/sched/coredump.h > @@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm) > #define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\ > MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK) > > +#define MMF_VM_MERGE_ANY 29 > #endif /* _LINUX_SCHED_COREDUMP_H */ > diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h > index 1312a137f7fb..759b3f53e53f 100644 > --- a/include/uapi/linux/prctl.h > +++ b/include/uapi/linux/prctl.h > @@ -290,4 +290,6 @@ struct prctl_mm_map { > #define PR_SET_VMA 0x53564d41 > # define PR_SET_VMA_ANON_NAME 0 > > +#define PR_SET_MEMORY_MERGE 67 > +#define PR_GET_MEMORY_MERGE 68 > #endif /* _LINUX_PRCTL_H */ > diff --git a/kernel/sys.c b/kernel/sys.c > index 495cd87d9bf4..edc439b1cae9 100644 > --- a/kernel/sys.c > +++ b/kernel/sys.c > @@ -15,6 +15,7 @@ > #include <linux/highuid.h> > #include <linux/fs.h> > #include <linux/kmod.h> > +#include <linux/ksm.h> > #include <linux/perf_event.h> > #include <linux/resource.h> > #include <linux/kernel.h> > @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3, > case PR_SET_VMA: > error = prctl_set_vma(arg2, arg3, arg4, arg5); > break; > +#ifdef CONFIG_KSM > + case PR_SET_MEMORY_MERGE: > + if (!capable(CAP_SYS_RESOURCE)) > + return -EPERM; > + > + if (arg2) { > + if (mmap_write_lock_killable(me->mm)) > + return -EINTR; > + > + if (!test_bit(MMF_VM_MERGE_ANY, 
&me->mm->flags)) > + error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY); Hm, I think this might be problematic if we alread called __ksm_enter() via madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY set MMF_VM_MERGABLE instead. Like: error = 0; if(test_bit(MMF_VM_MERGEABLE, &me->mm->flags)) error = __ksm_enter(me->mm); if (!error) set_bit(MMF_VM_MERGE_ANY, &me->mm->flags); > + mmap_write_unlock(me->mm); > + } else { > + __ksm_exit(me->mm, MMF_VM_MERGE_ANY); Hm, I'd prefer if we really only call __ksm_exit() when we really exit the process. Is there a strong requirement to optimize disabling of KSM or would it be sufficient to clear the MMF_VM_MERGE_ANY flag here? Also, I wonder what happens if we have another VMA in that process that has it enabled .. Last but not least, wouldn't we want to do the same thing as MADV_UNMERGEABLE and actually unmerge the KSM pages? It smells like it could be simpler and more consistent to handle by letting PR_SET_MEMORY_MERGE piggy-back on MMF_VM_MERGABLE/VM_MERGABLE and mimic what ksm_madvise() does simply for all VMAs. > --- a/mm/ksm.c > +++ b/mm/ksm.c > @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, > return (ret & VM_FAULT_OOM) ? -ENOMEM : 0; > } > > +static bool vma_ksm_compatible(struct vm_area_struct *vma) > +{ > + /* > + * Be somewhat over-protective for now! > + */ > + if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE | > + VM_PFNMAP | VM_IO | VM_DONTEXPAND | > + VM_HUGETLB | VM_MIXEDMAP)) > + return false; /* just ignore the advice */ That comment is kind-of stale and ksm_madvise() specific. > + The VM_MERGEABLE check is really only used for ksm_madvise() to return immediately. I suggest keeping it in ksm_madvise() -- "Already enabled". Returning "false" in that case looks wrong (it's not broken because you do an early check in vma_ksm_mergeable(), it's just semantically weird).
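To make that suggestion concrete, here is a minimal sketch of the prctl handler piggy-backing on MMF_VM_MERGEABLE. Two assumptions on my part: the test is negated so that __ksm_enter() only runs when KSM is not yet active for the mm, and the disable path merely clears the per-process flag (whether it should also unmerge is the open question raised above):

#ifdef CONFIG_KSM
	case PR_SET_MEMORY_MERGE:
		if (!capable(CAP_SYS_RESOURCE))
			return -EPERM;

		if (mmap_write_lock_killable(me->mm))
			return -EINTR;

		if (arg2) {
			/* Enter KSM only if madvise() has not already done so. */
			error = 0;
			if (!test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
				error = __ksm_enter(me->mm);
			if (!error)
				set_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
		} else {
			/* Keep the mm registered with KSM; just drop the hint. */
			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
		}
		mmap_write_unlock(me->mm);
		break;
#endif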
On 03.04.23 12:37, David Hildenbrand wrote: > On 10.03.23 19:28, Stefan Roesch wrote: >> Patch series "mm: process/cgroup ksm support", v3. >> >> So far KSM can only be enabled by calling madvise for memory regions. To >> be able to use KSM for more workloads, KSM needs to have the ability to be >> enabled / disabled at the process / cgroup level. >> >> Use case 1: >> >> The madvise call is not available in the programming language. An >> example for this are programs with forked workloads using a garbage >> collected language without pointers. In such a language madvise cannot >> be made available. >> >> In addition the addresses of objects get moved around as they are >> garbage collected. KSM sharing needs to be enabled "from the outside" >> for these type of workloads. > > I guess the interpreter could enable it (like a memory allocator could > enable it for the whole heap). But I get that it's much easier to enable > this per-process, and eventually only when a lot of the same processes > are running in that particular environment. > >> >> Use case 2: >> >> The same interpreter can also be used for workloads where KSM brings >> no benefit or even has overhead. We'd like to be able to enable KSM on >> a workload by workload basis. > > Agreed. A per-process control is also helpful to identidy workloads > where KSM might be beneficial (and to which degree). > >> >> Use case 3: >> >> With the madvise call sharing opportunities are only enabled for the >> current process: it is a workload-local decision. A considerable number >> of sharing opportuniites may exist across multiple workloads or jobs. >> Only a higler level entity like a job scheduler or container can know >> for certain if its running one or more instances of a job. That job >> scheduler however doesn't have the necessary internal worklaod knowledge >> to make targeted madvise calls. >> >> Security concerns: >> >> In previous discussions security concerns have been brought up. The >> problem is that an individual workload does not have the knowledge about >> what else is running on a machine. Therefore it has to be very >> conservative in what memory areas can be shared or not. However, if the >> system is dedicated to running multiple jobs within the same security >> domain, its the job scheduler that has the knowledge that sharing can be >> safely enabled and is even desirable. >> >> Performance: >> >> Experiments with using UKSM have shown a capacity increase of around >> 20%. >> > > As raised, it would be great to include more details about the workload > where this particulalry helps (e.g., a lot of Django processes operating > in the same domain). > >> >> 1. New options for prctl system command >> >> This patch series adds two new options to the prctl system call. >> The first one allows to enable KSM at the process level and the second >> one to query the setting. >> >> The setting will be inherited by child processes. >> >> With the above setting, KSM can be enabled for the seed process of a >> cgroup and all processes in the cgroup will inherit the setting. >> >> 2. Changes to KSM processing >> >> When KSM is enabled at the process level, the KSM code will iterate >> over all the VMA's and enable KSM for the eligible VMA's. >> >> When forking a process that has KSM enabled, the setting will be >> inherited by the new child process. >> >> In addition when KSM is disabled for a process, KSM will be disabled >> for the VMA's where KSM has been enabled. 
> > Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new > prctl is enabled for a process? > >> >> 3. Add general_profit metric >> >> The general_profit metric of KSM is specified in the documentation, >> but not calculated. This adds the general profit metric to >> /sys/kernel/debug/mm/ksm. >> >> 4. Add more metrics to ksm_stat >> >> This adds the process profit and ksm type metric to >> /proc/<pid>/ksm_stat. >> >> 5. Add more tests to ksm_tests >> >> This adds an option to specify the merge type to the ksm_tests. >> This allows to test madvise and prctl KSM. It also adds a new option >> to query if prctl KSM has been enabled. It adds a fork test to verify >> that the KSM process setting is inherited by client processes. >> >> An update to the prctl(2) manpage has been proposed at [1]. >> >> This patch (of 3): >> >> This adds a new prctl to API to enable and disable KSM on a per process >> basis instead of only at the VMA basis (with madvise). >> >> 1) Introduce new MMF_VM_MERGE_ANY flag >> >> This introduces the new flag MMF_VM_MERGE_ANY flag. When this flag >> is set, kernel samepage merging (ksm) gets enabled for all vma's of a >> process. >> >> 2) add flag to __ksm_enter >> >> This change adds the flag parameter to __ksm_enter. This allows to >> distinguish if ksm was called by prctl or madvise. >> >> 3) add flag to __ksm_exit call >> >> This adds the flag parameter to the __ksm_exit() call. This allows >> to distinguish if this call is for an prctl or madvise invocation. >> >> 4) invoke madvise for all vmas in scan_get_next_rmap_item >> >> If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate >> over all the vmas and enable ksm if possible. For the vmas that can be >> ksm enabled this is only done once. >> >> 5) support disabling of ksm for a process >> >> This adds the ability to disable ksm for a process if ksm has been >> enabled for the process. >> >> 6) add new prctl option to get and set ksm for a process >> >> This adds two new options to the prctl system call >> - enable ksm for all vmas of a process (if the vmas support it). >> - query if ksm has been enabled for a process. > > > Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way, > to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE > infrastructure. Especially: > > 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGABLE on all applicable > compatible. Further, set MMF_VM_MERGEABLE and enter KSM if not > already set. > > 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set > VM_MERGABLE? > > The you can avoid all runtime checks for compatible VMAs and only look > at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely > expressive then for all VMAs. You don't need vma_ksm_mergeable() then. > > > Another thing to consider is interaction with arch/s390/mm/gmap.c: > s390x/kvm does not support KSM and it has to disable it for all VMAs. We > have to find a way to fence the prctl (for example, fail setting the > prctl after gmap_mark_unmergeable() ran, and make > gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully > in some other way). Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing what it's supposed to do. I don't think this effectively prevents right now madvise() from getting re-enabled on that VMA. @Christian, Janosch, am I missing something?
David Hildenbrand <david@redhat.com> writes: > On 10.03.23 19:28, Stefan Roesch wrote: >> Patch series "mm: process/cgroup ksm support", v3. >> So far KSM can only be enabled by calling madvise for memory regions. To >> be able to use KSM for more workloads, KSM needs to have the ability to be >> enabled / disabled at the process / cgroup level. >> Use case 1: >> The madvise call is not available in the programming language. An >> example for this are programs with forked workloads using a garbage >> collected language without pointers. In such a language madvise cannot >> be made available. >> In addition the addresses of objects get moved around as they are >> garbage collected. KSM sharing needs to be enabled "from the outside" >> for these type of workloads. > > I guess the interpreter could enable it (like a memory allocator could enable it > for the whole heap). But I get that it's much easier to enable this per-process, > and eventually only when a lot of the same processes are running in that > particular environment. > We don't want it to get enabled for all workloads of that interpreter, instead we want to be able to select for which workloads we enable KSM. >> Use case 2: >> The same interpreter can also be used for workloads where KSM brings >> no benefit or even has overhead. We'd like to be able to enable KSM on >> a workload by workload basis. > > Agreed. A per-process control is also helpful to identidy workloads where KSM > might be beneficial (and to which degree). > >> Use case 3: >> With the madvise call sharing opportunities are only enabled for the >> current process: it is a workload-local decision. A considerable number >> of sharing opportuniites may exist across multiple workloads or jobs. >> Only a higler level entity like a job scheduler or container can know >> for certain if its running one or more instances of a job. That job >> scheduler however doesn't have the necessary internal worklaod knowledge >> to make targeted madvise calls. >> Security concerns: >> In previous discussions security concerns have been brought up. The >> problem is that an individual workload does not have the knowledge about >> what else is running on a machine. Therefore it has to be very >> conservative in what memory areas can be shared or not. However, if the >> system is dedicated to running multiple jobs within the same security >> domain, its the job scheduler that has the knowledge that sharing can be >> safely enabled and is even desirable. >> Performance: >> Experiments with using UKSM have shown a capacity increase of around >> 20%. >> > > As raised, it would be great to include more details about the workload where > this particulalry helps (e.g., a lot of Django processes operating in the same > domain). > I can add that the django processes are part of the same domain with the next version of the patch series. >> 1. New options for prctl system command >> This patch series adds two new options to the prctl system call. >> The first one allows to enable KSM at the process level and the second >> one to query the setting. >> The setting will be inherited by child processes. >> With the above setting, KSM can be enabled for the seed process of a >> cgroup and all processes in the cgroup will inherit the setting. >> 2. Changes to KSM processing >> When KSM is enabled at the process level, the KSM code will iterate >> over all the VMA's and enable KSM for the eligible VMA's. 
>> When forking a process that has KSM enabled, the setting will be >> inherited by the new child process. >> In addition when KSM is disabled for a process, KSM will be disabled >> for the VMA's where KSM has been enabled. > > Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new prctl is > enabled for a process? I decided to allow enabling KSM with prctl even when MADV_MERGEABLE, this allows more flexibility. > >> 3. Add general_profit metric >> The general_profit metric of KSM is specified in the documentation, >> but not calculated. This adds the general profit metric to >> /sys/kernel/debug/mm/ksm. >> 4. Add more metrics to ksm_stat >> This adds the process profit and ksm type metric to >> /proc/<pid>/ksm_stat. >> 5. Add more tests to ksm_tests >> This adds an option to specify the merge type to the ksm_tests. >> This allows to test madvise and prctl KSM. It also adds a new option >> to query if prctl KSM has been enabled. It adds a fork test to verify >> that the KSM process setting is inherited by client processes. >> An update to the prctl(2) manpage has been proposed at [1]. >> This patch (of 3): >> This adds a new prctl to API to enable and disable KSM on a per process >> basis instead of only at the VMA basis (with madvise). >> 1) Introduce new MMF_VM_MERGE_ANY flag >> This introduces the new flag MMF_VM_MERGE_ANY flag. When this flag >> is set, kernel samepage merging (ksm) gets enabled for all vma's of a >> process. >> 2) add flag to __ksm_enter >> This change adds the flag parameter to __ksm_enter. This allows to >> distinguish if ksm was called by prctl or madvise. >> 3) add flag to __ksm_exit call >> This adds the flag parameter to the __ksm_exit() call. This allows >> to distinguish if this call is for an prctl or madvise invocation. >> 4) invoke madvise for all vmas in scan_get_next_rmap_item >> If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate >> over all the vmas and enable ksm if possible. For the vmas that can be >> ksm enabled this is only done once. >> 5) support disabling of ksm for a process >> This adds the ability to disable ksm for a process if ksm has been >> enabled for the process. >> 6) add new prctl option to get and set ksm for a process >> This adds two new options to the prctl system call >> - enable ksm for all vmas of a process (if the vmas support it). >> - query if ksm has been enabled for a process. > > > Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way, to > instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE infrastructure. > Especially: > > 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGABLE on all applicable > compatible. Further, set MMF_VM_MERGEABLE and enter KSM if not > already set. > > 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set > VM_MERGABLE? > > The you can avoid all runtime checks for compatible VMAs and only look at the > VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely expressive then > for all VMAs. You don't need vma_ksm_mergeable() then. > I didn't consider the above approach, I can have a look. I can see the benefit of not needing vma_ksm_mergeable(). > Another thing to consider is interaction with arch/s390/mm/gmap.c: s390x/kvm > does not support KSM and it has to disable it for all VMAs. We have to find a > way to fence the prctl (for example, fail setting the prctl after > gmap_mark_unmergeable() ran, and make gmap_mark_unmergeable() fail if the prctl > ran -- or handle it gracefully in some other way). 
> > I'll have a look. >> Link: https://lkml.kernel.org/r/20230227220206.436662-1-shr@devkernel.io [1] >> Link: https://lkml.kernel.org/r/20230224044000.3084046-1-shr@devkernel.io >> Link: https://lkml.kernel.org/r/20230224044000.3084046-2-shr@devkernel.io >> Signed-off-by: Stefan Roesch <shr@devkernel.io> >> Cc: David Hildenbrand <david@redhat.com> >> Cc: Johannes Weiner <hannes@cmpxchg.org> >> Cc: Michal Hocko <mhocko@suse.com> >> Cc: Rik van Riel <riel@surriel.com> >> Cc: Bagas Sanjaya <bagasdotme@gmail.com> >> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> >> --- >> include/linux/ksm.h | 14 ++++-- >> include/linux/sched/coredump.h | 1 + >> include/uapi/linux/prctl.h | 2 + >> kernel/sys.c | 27 ++++++++++ >> mm/ksm.c | 90 +++++++++++++++++++++++----------- >> 5 files changed, 101 insertions(+), 33 deletions(-) >> diff --git a/include/linux/ksm.h b/include/linux/ksm.h >> index 7e232ba59b86..d38a05a36298 100644 >> --- a/include/linux/ksm.h >> +++ b/include/linux/ksm.h >> @@ -18,20 +18,24 @@ >> #ifdef CONFIG_KSM >> int ksm_madvise(struct vm_area_struct *vma, unsigned long start, >> unsigned long end, int advice, unsigned long *vm_flags); >> -int __ksm_enter(struct mm_struct *mm); >> -void __ksm_exit(struct mm_struct *mm); >> +int __ksm_enter(struct mm_struct *mm, int flag); >> +void __ksm_exit(struct mm_struct *mm, int flag); >> static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm) >> { >> + if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags)) >> + return __ksm_enter(mm, MMF_VM_MERGE_ANY); >> if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags)) >> - return __ksm_enter(mm); >> + return __ksm_enter(mm, MMF_VM_MERGEABLE); >> return 0; >> } >> static inline void ksm_exit(struct mm_struct *mm) >> { >> - if (test_bit(MMF_VM_MERGEABLE, &mm->flags)) >> - __ksm_exit(mm); >> + if (test_bit(MMF_VM_MERGE_ANY, &mm->flags)) >> + __ksm_exit(mm, MMF_VM_MERGE_ANY); >> + else if (test_bit(MMF_VM_MERGEABLE, &mm->flags)) >> + __ksm_exit(mm, MMF_VM_MERGEABLE); >> } >> /* >> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h >> index 0e17ae7fbfd3..0ee96ea7a0e9 100644 >> --- a/include/linux/sched/coredump.h >> +++ b/include/linux/sched/coredump.h >> @@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm) >> #define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\ >> MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK) >> +#define MMF_VM_MERGE_ANY 29 >> #endif /* _LINUX_SCHED_COREDUMP_H */ >> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h >> index 1312a137f7fb..759b3f53e53f 100644 >> --- a/include/uapi/linux/prctl.h >> +++ b/include/uapi/linux/prctl.h >> @@ -290,4 +290,6 @@ struct prctl_mm_map { >> #define PR_SET_VMA 0x53564d41 >> # define PR_SET_VMA_ANON_NAME 0 >> +#define PR_SET_MEMORY_MERGE 67 >> +#define PR_GET_MEMORY_MERGE 68 >> #endif /* _LINUX_PRCTL_H */ >> diff --git a/kernel/sys.c b/kernel/sys.c >> index 495cd87d9bf4..edc439b1cae9 100644 >> --- a/kernel/sys.c >> +++ b/kernel/sys.c >> @@ -15,6 +15,7 @@ >> #include <linux/highuid.h> >> #include <linux/fs.h> >> #include <linux/kmod.h> >> +#include <linux/ksm.h> >> #include <linux/perf_event.h> >> #include <linux/resource.h> >> #include <linux/kernel.h> >> @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3, >> case PR_SET_VMA: >> error = prctl_set_vma(arg2, arg3, arg4, arg5); >> break; >> +#ifdef CONFIG_KSM >> + case PR_SET_MEMORY_MERGE: >> + if (!capable(CAP_SYS_RESOURCE)) >> + return -EPERM; >> + >> + if (arg2) { >> + if 
(mmap_write_lock_killable(me->mm)) >> + return -EINTR; >> + >> + if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags)) >> + error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY); > > Hm, I think this might be problematic if we alread called __ksm_enter() via > madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY set > MMF_VM_MERGABLE instead. Like: > > error = 0; > if(test_bit(MMF_VM_MERGEABLE, &me->mm->flags)) > error = __ksm_enter(me->mm); > if (!error) > set_bit(MMF_VM_MERGE_ANY, &me->mm->flags); > If we make that change, we would no longer be able to distinguish if MMF_VM_MERGEABLE or MMF_VM_MERGE_ANY have been set. >> + mmap_write_unlock(me->mm); >> + } else { >> + __ksm_exit(me->mm, MMF_VM_MERGE_ANY); > > Hm, I'd prefer if we really only call __ksm_exit() when we really exit the > process. Is there a strong requirement to optimize disabling of KSM or would it > be sufficient to clear the MMF_VM_MERGE_ANY flag here? > Then we still have the mm_slot allocated until the process gets terminated. > Also, I wonder what happens if we have another VMA in that process that has it > enabled .. > > Last but not least, wouldn't we want to do the same thing as MADV_UNMERGEABLE > and actually unmerge the KSM pages? > Do you want to call unmerge for all VMA's? > > It smells like it could be simpler and more consistent to handle by letting > PR_SET_MEMORY_MERGE piggy-back on MMF_VM_MERGABLE/VM_MERGABLE and mimic what > ksm_madvise() does simply for all VMAs. > >> --- a/mm/ksm.c >> +++ b/mm/ksm.c >> @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, >> return (ret & VM_FAULT_OOM) ? -ENOMEM : 0; >> } >> +static bool vma_ksm_compatible(struct vm_area_struct *vma) >> +{ >> + /* >> + * Be somewhat over-protective for now! >> + */ >> + if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE | >> + VM_PFNMAP | VM_IO | VM_DONTEXPAND | >> + VM_HUGETLB | VM_MIXEDMAP)) >> + return false; /* just ignore the advice */ > > That comment is kind-of stale and ksm_madvise() specific. > I'll remove the comment. >> + > > The VM_MERGEABLE check is really only used for ksm_madvise() to return > immediately. I suggest keeping it in ksm_madvise() -- "Already enabled". > Returning "false" in that case looks wrong (it's not broken because you do an > early check in vma_ksm_mergeable(), it's just semantically weird). > I'll make it in ksm_madvise and remove it here.
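For illustration, the MADV_MERGEABLE arm of ksm_madvise() could then start roughly like this (a fragment, assuming the compatibility test has been factored out into vma_ksm_compatible(); mm is vma->vm_mm and err is a local in ksm_madvise(), and __ksm_enter() is shown in its original single-argument form):

	case MADV_MERGEABLE:
		if (*vm_flags & VM_MERGEABLE)		/* already enabled */
			return 0;
		if (!vma_ksm_compatible(vma))		/* just ignore the advice */
			return 0;

		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
			err = __ksm_enter(mm);
			if (err)
				return err;
		}

		*vm_flags |= VM_MERGEABLE;
		break;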
On 03.04.23 17:50, Stefan Roesch wrote: >> I guess the interpreter could enable it (like a memory allocator could enable it >> for the whole heap). But I get that it's much easier to enable this per-process, >> and eventually only when a lot of the same processes are running in that >> particular environment. >> > > We don't want it to get enabled for all workloads of that interpreter, > instead we want to be able to select for which workloads we enable KSM. > Right. > >>> 1. New options for prctl system command >>> This patch series adds two new options to the prctl system call. >>> The first one allows to enable KSM at the process level and the second >>> one to query the setting. >>> The setting will be inherited by child processes. >>> With the above setting, KSM can be enabled for the seed process of a >>> cgroup and all processes in the cgroup will inherit the setting. >>> 2. Changes to KSM processing >>> When KSM is enabled at the process level, the KSM code will iterate >>> over all the VMA's and enable KSM for the eligible VMA's. >>> When forking a process that has KSM enabled, the setting will be >>> inherited by the new child process. >>> In addition when KSM is disabled for a process, KSM will be disabled >>> for the VMA's where KSM has been enabled. >> >> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new prctl is >> enabled for a process? > > I decided to allow enabling KSM with prctl even when MADV_MERGEABLE, > this allows more flexibility. MADV_MERGEABLE will be a nop. But IIUC, MADV_UNMERGEABLE will end up calling unmerge_ksm_pages() and clear VM_MERGEABLE. But then, the next KSM scan will merge the pages in there again. Not sure if that flexibility is worth having. [...] >>> @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3, >>> case PR_SET_VMA: >>> error = prctl_set_vma(arg2, arg3, arg4, arg5); >>> break; >>> +#ifdef CONFIG_KSM >>> + case PR_SET_MEMORY_MERGE: >>> + if (!capable(CAP_SYS_RESOURCE)) >>> + return -EPERM; >>> + >>> + if (arg2) { >>> + if (mmap_write_lock_killable(me->mm)) >>> + return -EINTR; >>> + >>> + if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags)) >>> + error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY); >> >> Hm, I think this might be problematic if we alread called __ksm_enter() via >> madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY set >> MMF_VM_MERGABLE instead. Like: >> >> error = 0; >> if(test_bit(MMF_VM_MERGEABLE, &me->mm->flags)) >> error = __ksm_enter(me->mm); >> if (!error) >> set_bit(MMF_VM_MERGE_ANY, &me->mm->flags); >> > > If we make that change, we would no longer be able to distinguish > if MMF_VM_MERGEABLE or MMF_VM_MERGE_ANY have been set. Why would you need that exactly? To cleanup? See below. > >>> + mmap_write_unlock(me->mm); >>> + } else { >>> + __ksm_exit(me->mm, MMF_VM_MERGE_ANY); >> >> Hm, I'd prefer if we really only call __ksm_exit() when we really exit the >> process. Is there a strong requirement to optimize disabling of KSM or would it >> be sufficient to clear the MMF_VM_MERGE_ANY flag here? >> > Then we still have the mm_slot allocated until the process gets > terminated. Which is the same as using MADV_UNMERGEABLE, no? > >> Also, I wonder what happens if we have another VMA in that process that has it >> enabled .. >> >> Last but not least, wouldn't we want to do the same thing as MADV_UNMERGEABLE >> and actually unmerge the KSM pages? >> > Do you want to call unmerge for all VMA's? 
The question is what clearing MMF_VM_MERGE_ANY is supposed to do. If it's supposed to disable KSM (like MADV_UNMERGEABLE would), then I guess you should go over all VMAs and unmerge.

Also, it depends on how you want to handle VM_MERGEABLE with MMF_VM_MERGE_ANY. If MMF_VM_MERGE_ANY would not set VM_MERGEABLE, then you'd only unmerge where VM_MERGEABLE is not set. Otherwise, you'd unshare everywhere VM_MERGEABLE is set (and clear VM_MERGEABLE while at it). Unsharing when clearing MMF_VM_MERGE_ANY might be the right thing to do IMHO.

I guess the main questions regarding implementation are:

1) Do we want setting MMF_VM_MERGE_ANY to set VM_MERGEABLE on all candidate VMAs (go over all VMAs and set VM_MERGEABLE)? Then, clearing MMF_VM_MERGE_ANY would simply unmerge and clear VM_MERGEABLE on all VMAs.

2) Do we want to make MMF_VM_MERGE_ANY imply MMF_VM_MERGEABLE? You could still disable KSM (__ksm_exit()) when clearing MMF_VM_MERGE_ANY after going over all VMAs (where you might want to unshare already either way).

I guess the code will end up simpler if you make MMF_VM_MERGE_ANY simply piggy-back on MMF_VM_MERGEABLE + VM_MERGEABLE. I might be wrong, of course.
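A rough sketch of option 1), reusing the existing MADV_UNMERGEABLE machinery per VMA; the function name is made up, locking and the s390 interaction are ignored, and for_each_vma()/vm_flags_reset() are assumed to be the helpers available in current trees:

/* Hypothetical disable path for PR_SET_MEMORY_MERGE: unmerge every VMA,
 * clear VM_MERGEABLE, then drop the per-process flag. Caller holds the
 * mmap_lock for writing.
 */
static int ksm_unmerge_and_clear_all(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	unsigned long vm_flags;
	int err;
	VMA_ITERATOR(vmi, mm, 0);

	for_each_vma(vmi, vma) {
		vm_flags = vma->vm_flags;
		err = ksm_madvise(vma, vma->vm_start, vma->vm_end,
				  MADV_UNMERGEABLE, &vm_flags);
		if (err)
			return err;
		vm_flags_reset(vma, vm_flags);
	}

	clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
	return 0;
}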
David Hildenbrand <david@redhat.com> writes: > On 03.04.23 12:37, David Hildenbrand wrote: >> On 10.03.23 19:28, Stefan Roesch wrote: >>> Patch series "mm: process/cgroup ksm support", v3. >>> >>> So far KSM can only be enabled by calling madvise for memory regions. To >>> be able to use KSM for more workloads, KSM needs to have the ability to be >>> enabled / disabled at the process / cgroup level. >>> >>> Use case 1: >>> >>> The madvise call is not available in the programming language. An >>> example for this are programs with forked workloads using a garbage >>> collected language without pointers. In such a language madvise cannot >>> be made available. >>> >>> In addition the addresses of objects get moved around as they are >>> garbage collected. KSM sharing needs to be enabled "from the outside" >>> for these type of workloads. >> I guess the interpreter could enable it (like a memory allocator could >> enable it for the whole heap). But I get that it's much easier to enable >> this per-process, and eventually only when a lot of the same processes >> are running in that particular environment. >> >>> >>> Use case 2: >>> >>> The same interpreter can also be used for workloads where KSM brings >>> no benefit or even has overhead. We'd like to be able to enable KSM on >>> a workload by workload basis. >> Agreed. A per-process control is also helpful to identidy workloads >> where KSM might be beneficial (and to which degree). >> >>> >>> Use case 3: >>> >>> With the madvise call sharing opportunities are only enabled for the >>> current process: it is a workload-local decision. A considerable number >>> of sharing opportuniites may exist across multiple workloads or jobs. >>> Only a higler level entity like a job scheduler or container can know >>> for certain if its running one or more instances of a job. That job >>> scheduler however doesn't have the necessary internal worklaod knowledge >>> to make targeted madvise calls. >>> >>> Security concerns: >>> >>> In previous discussions security concerns have been brought up. The >>> problem is that an individual workload does not have the knowledge about >>> what else is running on a machine. Therefore it has to be very >>> conservative in what memory areas can be shared or not. However, if the >>> system is dedicated to running multiple jobs within the same security >>> domain, its the job scheduler that has the knowledge that sharing can be >>> safely enabled and is even desirable. >>> >>> Performance: >>> >>> Experiments with using UKSM have shown a capacity increase of around >>> 20%. >>> >> As raised, it would be great to include more details about the workload >> where this particulalry helps (e.g., a lot of Django processes operating >> in the same domain). >> >>> >>> 1. New options for prctl system command >>> >>> This patch series adds two new options to the prctl system call. >>> The first one allows to enable KSM at the process level and the second >>> one to query the setting. >>> >>> The setting will be inherited by child processes. >>> >>> With the above setting, KSM can be enabled for the seed process of a >>> cgroup and all processes in the cgroup will inherit the setting. >>> >>> 2. Changes to KSM processing >>> >>> When KSM is enabled at the process level, the KSM code will iterate >>> over all the VMA's and enable KSM for the eligible VMA's. >>> >>> When forking a process that has KSM enabled, the setting will be >>> inherited by the new child process. 
>>> >>> In addition when KSM is disabled for a process, KSM will be disabled >>> for the VMA's where KSM has been enabled. >> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new >> prctl is enabled for a process? >> >>> >>> 3. Add general_profit metric >>> >>> The general_profit metric of KSM is specified in the documentation, >>> but not calculated. This adds the general profit metric to >>> /sys/kernel/debug/mm/ksm. >>> >>> 4. Add more metrics to ksm_stat >>> >>> This adds the process profit and ksm type metric to >>> /proc/<pid>/ksm_stat. >>> >>> 5. Add more tests to ksm_tests >>> >>> This adds an option to specify the merge type to the ksm_tests. >>> This allows to test madvise and prctl KSM. It also adds a new option >>> to query if prctl KSM has been enabled. It adds a fork test to verify >>> that the KSM process setting is inherited by client processes. >>> >>> An update to the prctl(2) manpage has been proposed at [1]. >>> >>> This patch (of 3): >>> >>> This adds a new prctl to API to enable and disable KSM on a per process >>> basis instead of only at the VMA basis (with madvise). >>> >>> 1) Introduce new MMF_VM_MERGE_ANY flag >>> >>> This introduces the new flag MMF_VM_MERGE_ANY flag. When this flag >>> is set, kernel samepage merging (ksm) gets enabled for all vma's of a >>> process. >>> >>> 2) add flag to __ksm_enter >>> >>> This change adds the flag parameter to __ksm_enter. This allows to >>> distinguish if ksm was called by prctl or madvise. >>> >>> 3) add flag to __ksm_exit call >>> >>> This adds the flag parameter to the __ksm_exit() call. This allows >>> to distinguish if this call is for an prctl or madvise invocation. >>> >>> 4) invoke madvise for all vmas in scan_get_next_rmap_item >>> >>> If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate >>> over all the vmas and enable ksm if possible. For the vmas that can be >>> ksm enabled this is only done once. >>> >>> 5) support disabling of ksm for a process >>> >>> This adds the ability to disable ksm for a process if ksm has been >>> enabled for the process. >>> >>> 6) add new prctl option to get and set ksm for a process >>> >>> This adds two new options to the prctl system call >>> - enable ksm for all vmas of a process (if the vmas support it). >>> - query if ksm has been enabled for a process. >> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way, >> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE >> infrastructure. Especially: >> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGABLE on all applicable >> compatible. Further, set MMF_VM_MERGEABLE and enter KSM if not >> already set. >> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set >> VM_MERGABLE? >> The you can avoid all runtime checks for compatible VMAs and only look >> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely >> expressive then for all VMAs. You don't need vma_ksm_mergeable() then. >> Another thing to consider is interaction with arch/s390/mm/gmap.c: >> s390x/kvm does not support KSM and it has to disable it for all VMAs. We >> have to find a way to fence the prctl (for example, fail setting the >> prctl after gmap_mark_unmergeable() ran, and make >> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully >> in some other way). gmap_mark_unmergeable() seems to have a problem today. 
We can execute gmap_mark_unmergeable() and mark the VMAs as unmergeable, but shortly after that the process can run madvise on them again and make them mergeable. Am I missing something here? Once the prctl has run, we can check for the MMF_VM_MERGE_ANY flag in gmap_mark_unmergeable(). In case it is set, we can return an error; the error code path looks like it can handle that case. For the opposite case, where gmap_mark_unmergeable() has already been run, we would need some kind of flag or other means to be able to detect it. Any recommendations? > > Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing > what it's supposed to do. I don't think this effectively prevents right now > madvise() from getting re-enabled on that VMA. > > @Christian, Janosch, am I missing something?
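One possible shape for the fencing direction described above; this is purely illustrative, not the existing gmap code, and the -EINVAL return value is an arbitrary choice:

int gmap_mark_unmergeable(void)
{
	struct mm_struct *mm = current->mm;

	/*
	 * Hypothetical guard: once PR_SET_MEMORY_MERGE has been enabled,
	 * the KSM scanner would simply re-merge any VMA we unmerge here,
	 * so report the conflict to the caller instead.
	 */
	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
		return -EINVAL;

	/* ... existing per-VMA MADV_UNMERGEABLE walk ... */
	return 0;
}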
Am 03.04.23 um 13:03 schrieb David Hildenbrand: > On 03.04.23 12:37, David Hildenbrand wrote: >> On 10.03.23 19:28, Stefan Roesch wrote: >>> Patch series "mm: process/cgroup ksm support", v3. >>> >>> So far KSM can only be enabled by calling madvise for memory regions. To >>> be able to use KSM for more workloads, KSM needs to have the ability to be >>> enabled / disabled at the process / cgroup level. >>> >>> Use case 1: >>> >>> The madvise call is not available in the programming language. An >>> example for this are programs with forked workloads using a garbage >>> collected language without pointers. In such a language madvise cannot >>> be made available. >>> >>> In addition the addresses of objects get moved around as they are >>> garbage collected. KSM sharing needs to be enabled "from the outside" >>> for these type of workloads. >> >> I guess the interpreter could enable it (like a memory allocator could >> enable it for the whole heap). But I get that it's much easier to enable >> this per-process, and eventually only when a lot of the same processes >> are running in that particular environment. >> >>> >>> Use case 2: >>> >>> The same interpreter can also be used for workloads where KSM brings >>> no benefit or even has overhead. We'd like to be able to enable KSM on >>> a workload by workload basis. >> >> Agreed. A per-process control is also helpful to identidy workloads >> where KSM might be beneficial (and to which degree). >> >>> >>> Use case 3: >>> >>> With the madvise call sharing opportunities are only enabled for the >>> current process: it is a workload-local decision. A considerable number >>> of sharing opportuniites may exist across multiple workloads or jobs. >>> Only a higler level entity like a job scheduler or container can know >>> for certain if its running one or more instances of a job. That job >>> scheduler however doesn't have the necessary internal worklaod knowledge >>> to make targeted madvise calls. >>> >>> Security concerns: >>> >>> In previous discussions security concerns have been brought up. The >>> problem is that an individual workload does not have the knowledge about >>> what else is running on a machine. Therefore it has to be very >>> conservative in what memory areas can be shared or not. However, if the >>> system is dedicated to running multiple jobs within the same security >>> domain, its the job scheduler that has the knowledge that sharing can be >>> safely enabled and is even desirable. >>> >>> Performance: >>> >>> Experiments with using UKSM have shown a capacity increase of around >>> 20%. >>> >> >> As raised, it would be great to include more details about the workload >> where this particulalry helps (e.g., a lot of Django processes operating >> in the same domain). >> >>> >>> 1. New options for prctl system command >>> >>> This patch series adds two new options to the prctl system call. >>> The first one allows to enable KSM at the process level and the second >>> one to query the setting. >>> >>> The setting will be inherited by child processes. >>> >>> With the above setting, KSM can be enabled for the seed process of a >>> cgroup and all processes in the cgroup will inherit the setting. >>> >>> 2. Changes to KSM processing >>> >>> When KSM is enabled at the process level, the KSM code will iterate >>> over all the VMA's and enable KSM for the eligible VMA's. >>> >>> When forking a process that has KSM enabled, the setting will be >>> inherited by the new child process. 
>>> >>> In addition when KSM is disabled for a process, KSM will be disabled >>> for the VMA's where KSM has been enabled. >> >> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new >> prctl is enabled for a process? >> >>> >>> 3. Add general_profit metric >>> >>> The general_profit metric of KSM is specified in the documentation, >>> but not calculated. This adds the general profit metric to >>> /sys/kernel/debug/mm/ksm. >>> >>> 4. Add more metrics to ksm_stat >>> >>> This adds the process profit and ksm type metric to >>> /proc/<pid>/ksm_stat. >>> >>> 5. Add more tests to ksm_tests >>> >>> This adds an option to specify the merge type to the ksm_tests. >>> This allows to test madvise and prctl KSM. It also adds a new option >>> to query if prctl KSM has been enabled. It adds a fork test to verify >>> that the KSM process setting is inherited by client processes. >>> >>> An update to the prctl(2) manpage has been proposed at [1]. >>> >>> This patch (of 3): >>> >>> This adds a new prctl to API to enable and disable KSM on a per process >>> basis instead of only at the VMA basis (with madvise). >>> >>> 1) Introduce new MMF_VM_MERGE_ANY flag >>> >>> This introduces the new flag MMF_VM_MERGE_ANY flag. When this flag >>> is set, kernel samepage merging (ksm) gets enabled for all vma's of a >>> process. >>> >>> 2) add flag to __ksm_enter >>> >>> This change adds the flag parameter to __ksm_enter. This allows to >>> distinguish if ksm was called by prctl or madvise. >>> >>> 3) add flag to __ksm_exit call >>> >>> This adds the flag parameter to the __ksm_exit() call. This allows >>> to distinguish if this call is for an prctl or madvise invocation. >>> >>> 4) invoke madvise for all vmas in scan_get_next_rmap_item >>> >>> If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate >>> over all the vmas and enable ksm if possible. For the vmas that can be >>> ksm enabled this is only done once. >>> >>> 5) support disabling of ksm for a process >>> >>> This adds the ability to disable ksm for a process if ksm has been >>> enabled for the process. >>> >>> 6) add new prctl option to get and set ksm for a process >>> >>> This adds two new options to the prctl system call >>> - enable ksm for all vmas of a process (if the vmas support it). >>> - query if ksm has been enabled for a process. >> >> >> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way, >> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE >> infrastructure. Especially: >> >> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGABLE on all applicable >> compatible. Further, set MMF_VM_MERGEABLE and enter KSM if not >> already set. >> >> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set >> VM_MERGABLE? >> >> The you can avoid all runtime checks for compatible VMAs and only look >> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely >> expressive then for all VMAs. You don't need vma_ksm_mergeable() then. >> >> >> Another thing to consider is interaction with arch/s390/mm/gmap.c: >> s390x/kvm does not support KSM and it has to disable it for all VMAs. We Normally we do support KSM on s390. This is a special case for guests using storage keys. Those are attributes of the physical page and might differ even if the content of the page is the same. New Linux no longer uses it (unless a debug option is set during build) so we enable the guest storage keys lazy and break KSM pages in that process. Ideally we would continue this semantic (e.g. 
even after a prctl, if the guest enables storage keys, disable ksm for this VM). >> have to find a way to fence the prctl (for example, fail setting the >> prctl after gmap_mark_unmergeable() ran, and make >> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully >> in some other way). > > > Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing what it's supposed to do. I don't think this currently prevents madvise() from re-enabling VM_MERGEABLE on such a VMA. > > @Christian, Janosch, am I missing something? Yes, if QEMU did a madvise later on instead of just at the start, it would result in guest storage keys being messed up on KSM merges. One could argue that this is a bug in the hypervisor then (QEMU), but yes, we should try to make this more reliable in the kernel.
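For reference, the s390 path under discussion works roughly like the sketch below. This is simplified and written from memory, not a literal copy of arch/s390/mm/gmap.c; the helper name and the way vm_flags is written back are illustrative and differ between trees. The point it shows: every VMA is un-merged via ksm_madvise(MADV_UNMERGEABLE), and VM_MERGEABLE is then cleared from mm->def_flags, which only influences the flags of mappings created later.

/* Illustrative sketch of gmap_mark_unmergeable()-style behaviour. */
static int mark_mm_unmergeable(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	unsigned long vm_flags;
	int ret;
	VMA_ITERATOR(vmi, mm, 0);

	for_each_vma(vmi, vma) {
		vm_flags = vma->vm_flags;
		/* Break existing KSM pages and drop VM_MERGEABLE on this VMA. */
		ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
				  MADV_UNMERGEABLE, &vm_flags);
		if (ret)
			return ret;
		vma->vm_flags = vm_flags;	/* or the tree's vm_flags update helper */
	}
	/*
	 * Only seeds the default flags of future mappings; it does not stop a
	 * later madvise(MADV_MERGEABLE) from setting VM_MERGEABLE on an
	 * existing VMA, which is the gap discussed above.
	 */
	mm->def_flags &= ~VM_MERGEABLE;
	return 0;
}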
On 05.04.23 08:51, Christian Borntraeger wrote: > Am 03.04.23 um 13:03 schrieb David Hildenbrand: >> On 03.04.23 12:37, David Hildenbrand wrote: >>> On 10.03.23 19:28, Stefan Roesch wrote: >>>> Patch series "mm: process/cgroup ksm support", v3. >>>> >>>> So far KSM can only be enabled by calling madvise for memory regions. To >>>> be able to use KSM for more workloads, KSM needs to have the ability to be >>>> enabled / disabled at the process / cgroup level. >>>> >>>> Use case 1: >>>> >>>> The madvise call is not available in the programming language. An >>>> example for this are programs with forked workloads using a garbage >>>> collected language without pointers. In such a language madvise cannot >>>> be made available. >>>> >>>> In addition the addresses of objects get moved around as they are >>>> garbage collected. KSM sharing needs to be enabled "from the outside" >>>> for these type of workloads. >>> >>> I guess the interpreter could enable it (like a memory allocator could >>> enable it for the whole heap). But I get that it's much easier to enable >>> this per-process, and eventually only when a lot of the same processes >>> are running in that particular environment. >>> >>>> >>>> Use case 2: >>>> >>>> The same interpreter can also be used for workloads where KSM brings >>>> no benefit or even has overhead. We'd like to be able to enable KSM on >>>> a workload by workload basis. >>> >>> Agreed. A per-process control is also helpful to identidy workloads >>> where KSM might be beneficial (and to which degree). >>> >>>> >>>> Use case 3: >>>> >>>> With the madvise call sharing opportunities are only enabled for the >>>> current process: it is a workload-local decision. A considerable number >>>> of sharing opportuniites may exist across multiple workloads or jobs. >>>> Only a higler level entity like a job scheduler or container can know >>>> for certain if its running one or more instances of a job. That job >>>> scheduler however doesn't have the necessary internal worklaod knowledge >>>> to make targeted madvise calls. >>>> >>>> Security concerns: >>>> >>>> In previous discussions security concerns have been brought up. The >>>> problem is that an individual workload does not have the knowledge about >>>> what else is running on a machine. Therefore it has to be very >>>> conservative in what memory areas can be shared or not. However, if the >>>> system is dedicated to running multiple jobs within the same security >>>> domain, its the job scheduler that has the knowledge that sharing can be >>>> safely enabled and is even desirable. >>>> >>>> Performance: >>>> >>>> Experiments with using UKSM have shown a capacity increase of around >>>> 20%. >>>> >>> >>> As raised, it would be great to include more details about the workload >>> where this particulalry helps (e.g., a lot of Django processes operating >>> in the same domain). >>> >>>> >>>> 1. New options for prctl system command >>>> >>>> This patch series adds two new options to the prctl system call. >>>> The first one allows to enable KSM at the process level and the second >>>> one to query the setting. >>>> >>>> The setting will be inherited by child processes. >>>> >>>> With the above setting, KSM can be enabled for the seed process of a >>>> cgroup and all processes in the cgroup will inherit the setting. >>>> >>>> 2. Changes to KSM processing >>>> >>>> When KSM is enabled at the process level, the KSM code will iterate >>>> over all the VMA's and enable KSM for the eligible VMA's. 
>>>> >>>> When forking a process that has KSM enabled, the setting will be >>>> inherited by the new child process. >>>> >>>> In addition when KSM is disabled for a process, KSM will be disabled >>>> for the VMA's where KSM has been enabled. >>> >>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new >>> prctl is enabled for a process? >>> >>>> >>>> 3. Add general_profit metric >>>> >>>> The general_profit metric of KSM is specified in the documentation, >>>> but not calculated. This adds the general profit metric to >>>> /sys/kernel/debug/mm/ksm. >>>> >>>> 4. Add more metrics to ksm_stat >>>> >>>> This adds the process profit and ksm type metric to >>>> /proc/<pid>/ksm_stat. >>>> >>>> 5. Add more tests to ksm_tests >>>> >>>> This adds an option to specify the merge type to the ksm_tests. >>>> This allows to test madvise and prctl KSM. It also adds a new option >>>> to query if prctl KSM has been enabled. It adds a fork test to verify >>>> that the KSM process setting is inherited by client processes. >>>> >>>> An update to the prctl(2) manpage has been proposed at [1]. >>>> >>>> This patch (of 3): >>>> >>>> This adds a new prctl to API to enable and disable KSM on a per process >>>> basis instead of only at the VMA basis (with madvise). >>>> >>>> 1) Introduce new MMF_VM_MERGE_ANY flag >>>> >>>> This introduces the new flag MMF_VM_MERGE_ANY flag. When this flag >>>> is set, kernel samepage merging (ksm) gets enabled for all vma's of a >>>> process. >>>> >>>> 2) add flag to __ksm_enter >>>> >>>> This change adds the flag parameter to __ksm_enter. This allows to >>>> distinguish if ksm was called by prctl or madvise. >>>> >>>> 3) add flag to __ksm_exit call >>>> >>>> This adds the flag parameter to the __ksm_exit() call. This allows >>>> to distinguish if this call is for an prctl or madvise invocation. >>>> >>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item >>>> >>>> If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate >>>> over all the vmas and enable ksm if possible. For the vmas that can be >>>> ksm enabled this is only done once. >>>> >>>> 5) support disabling of ksm for a process >>>> >>>> This adds the ability to disable ksm for a process if ksm has been >>>> enabled for the process. >>>> >>>> 6) add new prctl option to get and set ksm for a process >>>> >>>> This adds two new options to the prctl system call >>>> - enable ksm for all vmas of a process (if the vmas support it). >>>> - query if ksm has been enabled for a process. >>> >>> >>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way, >>> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE >>> infrastructure. Especially: >>> >>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGABLE on all applicable >>> compatible. Further, set MMF_VM_MERGEABLE and enter KSM if not >>> already set. >>> >>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set >>> VM_MERGABLE? >>> >>> The you can avoid all runtime checks for compatible VMAs and only look >>> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely >>> expressive then for all VMAs. You don't need vma_ksm_mergeable() then. >>> >>> >>> Another thing to consider is interaction with arch/s390/mm/gmap.c: >>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We > > Normally we do support KSM on s390. This is a special case for guests using > storage keys. 
Those are attributes of the physical page and might differ even > if the content of the page is the same. > New Linux no longer uses it (unless a debug option is set during build) so we > enable the guest storage keys lazy and break KSM pages in that process. > Ideally we would continue this semantic (e.g. even after a prctl, if the > guest enables storage keys, disable ksm for this VM). IIRC, KSM also gets disabled when switching to protected VMs. I recall that we really wanted to stop KSM from scanning pages that are possibly protected. (I don't remember whether one could harm the system by enabling it before/after the switch.) > >>> have to find a way to fence the prctl (for example, fail setting the >>> prctl after gmap_mark_unmergeable() ran, and make >>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully >>> in some other way). >> >> >> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing what it's supposed to do. I don't think this currently prevents madvise() from re-enabling VM_MERGEABLE on such a VMA. >> >> @Christian, Janosch, am I missing something? > > Yes, if QEMU did a madvise later on instead of just at the start, it would > result in guest storage keys being messed up on KSM merges. One could argue > that this is a bug in the hypervisor then (QEMU), but yes, we should try > to make this more reliable in the kernel. It looks like the "mm->def_flags &= ~VM_MERGEABLE" was meant to achieve that, but fails to. At least it looks like completely unnecessary code, if I am not wrong. Maybe it was inspired by similar code in thp_split_mm(), which enforces VM_NOHUGEPAGE.
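To make the def_flags point concrete, here is a simplified illustration (not literal kernel code; both helpers below are invented for the example): mm->def_flags is only folded into a VMA's flags when the mapping is created, while madvise(MADV_MERGEABLE) sets VM_MERGEABLE directly on the existing VMA and never consults def_flags, so clearing a bit there has no lasting effect.

/* Roughly what mapping creation does when a new VMA is set up: */
static unsigned long new_vma_flags(struct mm_struct *mm, unsigned long prot_bits)
{
	/*
	 * def_flags is OR'ed in here -- clearing a bit that was never set
	 * (VM_MERGEABLE is not a default flag) changes nothing.
	 */
	return prot_bits | mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
}

/* Roughly what madvise(MADV_MERGEABLE) ends up doing via ksm_madvise(): */
static void advise_mergeable(struct vm_area_struct *vma, unsigned long *vm_flags)
{
	/*
	 * mm->def_flags is never looked at on this path, so a later
	 * madvise() can re-enable merging on any compatible VMA.
	 */
	*vm_flags |= VM_MERGEABLE;
}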
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 7e232ba59b86..d38a05a36298 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -18,20 +18,24 @@
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags);
-int __ksm_enter(struct mm_struct *mm);
-void __ksm_exit(struct mm_struct *mm);
+int __ksm_enter(struct mm_struct *mm, int flag);
+void __ksm_exit(struct mm_struct *mm, int flag);
 
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
+	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
+		return __ksm_enter(mm, MMF_VM_MERGE_ANY);
 	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
-		return __ksm_enter(mm);
+		return __ksm_enter(mm, MMF_VM_MERGEABLE);
 	return 0;
 }
 
 static inline void ksm_exit(struct mm_struct *mm)
 {
-	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
-		__ksm_exit(mm);
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		__ksm_exit(mm, MMF_VM_MERGE_ANY);
+	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+		__ksm_exit(mm, MMF_VM_MERGEABLE);
 }
 
 /*
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 0e17ae7fbfd3..0ee96ea7a0e9 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
 				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
 
+#define MMF_VM_MERGE_ANY	29
 #endif /* _LINUX_SCHED_COREDUMP_H */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 1312a137f7fb..759b3f53e53f 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -290,4 +290,6 @@ struct prctl_mm_map {
 #define PR_SET_VMA		0x53564d41
 # define PR_SET_VMA_ANON_NAME		0
 
+#define PR_SET_MEMORY_MERGE		67
+#define PR_GET_MEMORY_MERGE		68
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index 495cd87d9bf4..edc439b1cae9 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -15,6 +15,7 @@
 #include <linux/highuid.h>
 #include <linux/fs.h>
 #include <linux/kmod.h>
+#include <linux/ksm.h>
 #include <linux/perf_event.h>
 #include <linux/resource.h>
 #include <linux/kernel.h>
@@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 	case PR_SET_VMA:
 		error = prctl_set_vma(arg2, arg3, arg4, arg5);
 		break;
+#ifdef CONFIG_KSM
+	case PR_SET_MEMORY_MERGE:
+		if (!capable(CAP_SYS_RESOURCE))
+			return -EPERM;
+
+		if (arg2) {
+			if (mmap_write_lock_killable(me->mm))
+				return -EINTR;
+
+			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
+				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
+			mmap_write_unlock(me->mm);
+		} else {
+			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);
+		}
+		break;
+	case PR_GET_MEMORY_MERGE:
+		if (!capable(CAP_SYS_RESOURCE))
+			return -EPERM;
+
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+
+		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+		break;
+#endif
 	default:
 		error = -EINVAL;
 		break;
diff --git a/mm/ksm.c b/mm/ksm.c
index d7bd28199f6c..b8e6e734dd69 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
 	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }
 
+static bool vma_ksm_compatible(struct vm_area_struct *vma)
+{
+	/*
+	 * Be somewhat over-protective for now!
+	 */
+	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
+			     VM_PFNMAP | VM_IO | VM_DONTEXPAND |
+			     VM_HUGETLB | VM_MIXEDMAP))
+		return false;		/* just ignore the advice */
+
+	if (vma_is_dax(vma))
+		return false;
+
+#ifdef VM_SAO
+	if (*vm_flags & VM_SAO)
+		return false;
+#endif
+#ifdef VM_SPARC_ADI
+	if (*vm_flags & VM_SPARC_ADI)
+		return false;
+#endif
+
+	return true;
+}
+
+static bool vma_ksm_mergeable(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_MERGEABLE)
+		return true;
+
+	if (test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags) &&
+	    vma_ksm_compatible(vma))
+		return true;
+
+	return false;
+}
+
 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
 		unsigned long addr)
 {
 	struct vm_area_struct *vma;
+
 	if (ksm_test_exit(mm))
 		return NULL;
+
 	vma = vma_lookup(mm, addr);
-	if (!vma || !(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+	if (!vma || !vma->anon_vma)
 		return NULL;
-	return vma;
+	if (vma_ksm_mergeable(vma))
+		return vma;
+
+	return NULL;
 }
 
 static void break_cow(struct ksm_rmap_item *rmap_item)
@@ -1042,7 +1084,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 			goto mm_exiting;
 
 		for_each_vma(vmi, vma) {
-			if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+			if (!vma_ksm_mergeable(vma) || !vma->anon_vma)
 				continue;
 			err = unmerge_ksm_pages(vma,
 						vma->vm_start, vma->vm_end);
@@ -1065,6 +1107,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 
 			mm_slot_free(mm_slot_cache, mm_slot);
 			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+			clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 			mmdrop(mm);
 		} else
 			spin_unlock(&ksm_mmlist_lock);
@@ -2409,8 +2452,9 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		goto no_vmas;
 
 	for_each_vma(vmi, vma) {
-		if (!(vma->vm_flags & VM_MERGEABLE))
+		if (!vma_ksm_mergeable(vma))
 			continue;
+
 		if (ksm_scan.address < vma->vm_start)
 			ksm_scan.address = vma->vm_start;
 		if (!vma->anon_vma)
@@ -2495,6 +2539,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 
 		mm_slot_free(mm_slot_cache, mm_slot);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 		mmap_read_unlock(mm);
 		mmdrop(mm);
 	} else {
@@ -2579,28 +2624,12 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 
 	switch (advice) {
 	case MADV_MERGEABLE:
-		/*
-		 * Be somewhat over-protective for now!
-		 */
-		if (*vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
-				 VM_PFNMAP | VM_IO | VM_DONTEXPAND |
-				 VM_HUGETLB | VM_MIXEDMAP))
-			return 0;		/* just ignore the advice */
-
-		if (vma_is_dax(vma))
-			return 0;
-
-#ifdef VM_SAO
-		if (*vm_flags & VM_SAO)
-			return 0;
-#endif
-#ifdef VM_SPARC_ADI
-		if (*vm_flags & VM_SPARC_ADI)
+		if (!vma_ksm_compatible(vma))
 			return 0;
-#endif
 
-		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
-			err = __ksm_enter(mm);
+		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags) &&
+		    !test_bit(MMF_VM_MERGE_ANY, &mm->flags)) {
+			err = __ksm_enter(mm, MMF_VM_MERGEABLE);
 			if (err)
 				return err;
 		}
@@ -2626,7 +2655,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 }
 EXPORT_SYMBOL_GPL(ksm_madvise);
 
-int __ksm_enter(struct mm_struct *mm)
+int __ksm_enter(struct mm_struct *mm, int flag)
 {
 	struct ksm_mm_slot *mm_slot;
 	struct mm_slot *slot;
@@ -2659,7 +2688,7 @@ int __ksm_enter(struct mm_struct *mm)
 		list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node);
 	spin_unlock(&ksm_mmlist_lock);
 
-	set_bit(MMF_VM_MERGEABLE, &mm->flags);
+	set_bit(flag, &mm->flags);
 	mmgrab(mm);
 
 	if (needs_wakeup)
@@ -2668,12 +2697,17 @@ int __ksm_enter(struct mm_struct *mm)
 	return 0;
 }
 
-void __ksm_exit(struct mm_struct *mm)
+void __ksm_exit(struct mm_struct *mm, int flag)
 {
 	struct ksm_mm_slot *mm_slot;
 	struct mm_slot *slot;
 	int easy_to_free = 0;
 
+	if (!(current->flags & PF_EXITING) &&
+	    flag == MMF_VM_MERGE_ANY &&
+	    test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
+
 	/*
 	 * This process is exiting: if it's straightforward (as is the
 	 * case when ksmd was never running), free mm_slot immediately.
@@ -2700,7 +2734,7 @@ void __ksm_exit(struct mm_struct *mm)
 
 	if (easy_to_free) {
 		mm_slot_free(mm_slot_cache, mm_slot);
-		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(flag, &mm->flags);
 		mmdrop(mm);
 	} else if (mm_slot) {
 		mmap_write_lock(mm);
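For completeness, a minimal user-space sketch of how the interface added by this patch would be exercised (assuming the PR_SET_MEMORY_MERGE/PR_GET_MEMORY_MERGE values above and a task with CAP_SYS_RESOURCE; the program itself is not part of the series):

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_MEMORY_MERGE
#define PR_SET_MEMORY_MERGE	67	/* values proposed by this patch */
#define PR_GET_MEMORY_MERGE	68
#endif

int main(void)
{
	/* Opt the whole process into KSM; requires CAP_SYS_RESOURCE. */
	if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0))
		perror("PR_SET_MEMORY_MERGE");

	/* Query the per-process setting; returns 0 or 1 on success. */
	int on = prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0);
	if (on < 0)
		perror("PR_GET_MEMORY_MERGE");
	else
		printf("process-wide KSM: %s\n", on ? "enabled" : "disabled");

	return 0;
}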