From patchwork Fri Mar 10 18:28:49 2023
X-Patchwork-Submitter: Stefan Roesch
X-Patchwork-Id: 13169942
From: Stefan Roesch
To: kernel-team@fb.com
Cc: shr@devkernel.io, linux-mm@kvack.org, riel@surriel.com, mhocko@suse.com,
    david@redhat.com, linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
    akpm@linux-foundation.org, hannes@cmpxchg.org, Bagas Sanjaya
Subject: [PATCH v4 1/3] mm: add new api to enable ksm per process
Date: Fri, 10 Mar 2023 10:28:49 -0800
Message-Id: <20230310182851.2579138-2-shr@devkernel.io>
In-Reply-To: <20230310182851.2579138-1-shr@devkernel.io>
References: <20230310182851.2579138-1-shr@devkernel.io>
MIME-Version: 1.0
Patch series "mm: process/cgroup ksm support", v3.

So far KSM can only be enabled by calling madvise for memory regions.  To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.

Use case 1:
The madvise call is not available in the programming language.  An example
for this are programs with forked workloads using a garbage collected
language without pointers.  In such a language madvise cannot be made
available.

In addition the addresses of objects get moved around as they are garbage
collected.  KSM sharing needs to be enabled "from the outside" for these
types of workloads.

Use case 2:
The same interpreter can also be used for workloads where KSM brings no
benefit or even has overhead.  We'd like to be able to enable KSM on a
workload-by-workload basis.

Use case 3:
With the madvise call, sharing opportunities are only enabled for the
current process: it is a workload-local decision.  A considerable number
of sharing opportunities may exist across multiple workloads or jobs.
Only a higher-level entity like a job scheduler or container can know for
certain if it is running one or more instances of a job.  That job
scheduler however doesn't have the necessary internal workload knowledge
to make targeted madvise calls.

Security concerns:

In previous discussions security concerns have been brought up.  The
problem is that an individual workload does not have the knowledge about
what else is running on a machine.  Therefore it has to be very
conservative in what memory areas can be shared or not.  However, if the
system is dedicated to running multiple jobs within the same security
domain, it's the job scheduler that has the knowledge that sharing can be
safely enabled and is even desirable.

Performance:

Experiments with using UKSM have shown a capacity increase of around 20%.

1. New options for prctl system command

This patch series adds two new options to the prctl system call.  The
first one allows enabling KSM at the process level and the second one
queries the setting.  The setting will be inherited by child processes.

With the above setting, KSM can be enabled for the seed process of a
cgroup and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing

When KSM is enabled at the process level, the KSM code will iterate over
all the VMAs and enable KSM for the eligible VMAs.
When forking a process that has KSM enabled, the setting will be inherited
by the new child process.

In addition, when KSM is disabled for a process, KSM will be disabled for
the VMAs where KSM has been enabled.

3. Add general_profit metric

The general_profit metric of KSM is specified in the documentation, but
not calculated.  This adds the general profit metric to
/sys/kernel/debug/mm/ksm.

4. Add more metrics to ksm_stat

This adds the process profit and ksm type metric to
/proc/<pid>/ksm_stat.

5. Add more tests to ksm_tests

This adds an option to specify the merge type to the ksm_tests.  This
allows testing madvise and prctl KSM.  It also adds a new option to query
if prctl KSM has been enabled.  It adds a fork test to verify that the KSM
process setting is inherited by child processes.

An update to the prctl(2) manpage has been proposed at [1].

This patch (of 3):

This adds a new prctl API to enable and disable KSM on a per-process
basis, instead of only at the VMA level (with madvise).

1) Introduce new MMF_VM_MERGE_ANY flag

This introduces the new flag MMF_VM_MERGE_ANY.  When this flag is set,
kernel samepage merging (ksm) gets enabled for all VMAs of a process.

2) Add flag to __ksm_enter

This adds the flag parameter to __ksm_enter.  This allows distinguishing
whether ksm was called by prctl or madvise.

3) Add flag to __ksm_exit call

This adds the flag parameter to the __ksm_exit() call.  This allows
distinguishing whether this call is for a prctl or madvise invocation.

4) Invoke madvise for all VMAs in scan_get_next_rmap_item

If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate over
all the VMAs and enable ksm if possible.  For the VMAs that can be ksm
enabled this is only done once.

5) Support disabling of ksm for a process

This adds the ability to disable ksm for a process if ksm has been enabled
for the process.
6) Add new prctl option to get and set ksm for a process

This adds two new options to the prctl system call
- enable ksm for all VMAs of a process (if the VMAs support it)
- query if ksm has been enabled for a process

Link: https://lkml.kernel.org/r/20230227220206.436662-1-shr@devkernel.io [1]
Link: https://lkml.kernel.org/r/20230224044000.3084046-1-shr@devkernel.io
Link: https://lkml.kernel.org/r/20230224044000.3084046-2-shr@devkernel.io
Signed-off-by: Stefan Roesch
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Bagas Sanjaya
Signed-off-by: Andrew Morton
---
 include/linux/ksm.h            | 14 ++++--
 include/linux/sched/coredump.h |  1 +
 include/uapi/linux/prctl.h     |  2 +
 kernel/sys.c                   | 27 ++++++++++
 mm/ksm.c                       | 90 +++++++++++++++++++++++-----------
 5 files changed, 101 insertions(+), 33 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 7e232ba59b86..d38a05a36298 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -18,20 +18,24 @@
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
		unsigned long end, int advice, unsigned long *vm_flags);
-int __ksm_enter(struct mm_struct *mm);
-void __ksm_exit(struct mm_struct *mm);
+int __ksm_enter(struct mm_struct *mm, int flag);
+void __ksm_exit(struct mm_struct *mm, int flag);
 
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
+	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
+		return __ksm_enter(mm, MMF_VM_MERGE_ANY);
 	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
-		return __ksm_enter(mm);
+		return __ksm_enter(mm, MMF_VM_MERGEABLE);
 	return 0;
 }
 
 static inline void ksm_exit(struct mm_struct *mm)
 {
-	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
-		__ksm_exit(mm);
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		__ksm_exit(mm, MMF_VM_MERGE_ANY);
+	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+		__ksm_exit(mm, MMF_VM_MERGEABLE);
 }
 
 /*
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 0e17ae7fbfd3..0ee96ea7a0e9 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
 
+#define MMF_VM_MERGE_ANY	29
 #endif /* _LINUX_SCHED_COREDUMP_H */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 1312a137f7fb..759b3f53e53f 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -290,4 +290,6 @@ struct prctl_mm_map {
 #define PR_SET_VMA		0x53564d41
 # define PR_SET_VMA_ANON_NAME		0
 
+#define PR_SET_MEMORY_MERGE		67
+#define PR_GET_MEMORY_MERGE		68
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index 495cd87d9bf4..edc439b1cae9 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
	case PR_SET_VMA:
		error = prctl_set_vma(arg2, arg3, arg4, arg5);
		break;
+#ifdef CONFIG_KSM
+	case PR_SET_MEMORY_MERGE:
+		if (!capable(CAP_SYS_RESOURCE))
+			return -EPERM;
+
+		if (arg2) {
+			if (mmap_write_lock_killable(me->mm))
+				return -EINTR;
+
+			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
+				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
+			mmap_write_unlock(me->mm);
+		} else {
+			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);
+		}
+		break;
+	case PR_GET_MEMORY_MERGE:
+		if (!capable(CAP_SYS_RESOURCE))
+			return -EPERM;
+
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+
+		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+		break;
+#endif
	default:
		error = -EINVAL;
		break;
diff --git a/mm/ksm.c b/mm/ksm.c
index d7bd28199f6c..b8e6e734dd69 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }
 
+static bool vma_ksm_compatible(struct vm_area_struct *vma)
+{
+	/*
+	 * Be somewhat over-protective for now!
+	 */
+	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
+			     VM_PFNMAP | VM_IO | VM_DONTEXPAND |
+			     VM_HUGETLB | VM_MIXEDMAP))
+		return false;		/* just ignore the advice */
+
+	if (vma_is_dax(vma))
+		return false;
+
+#ifdef VM_SAO
+	if (vma->vm_flags & VM_SAO)
+		return false;
+#endif
+#ifdef VM_SPARC_ADI
+	if (vma->vm_flags & VM_SPARC_ADI)
+		return false;
+#endif
+
+	return true;
+}
+
+static bool vma_ksm_mergeable(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_MERGEABLE)
+		return true;
+
+	if (test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags) &&
+	    vma_ksm_compatible(vma))
+		return true;
+
+	return false;
+}
+
 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
		unsigned long addr)
 {
	struct vm_area_struct *vma;
+
	if (ksm_test_exit(mm))
		return NULL;
+
	vma = vma_lookup(mm, addr);
-	if (!vma || !(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+	if (!vma || !vma->anon_vma)
		return NULL;
-	return vma;
+	if (vma_ksm_mergeable(vma))
+		return vma;
+
+	return NULL;
 }
 
 static void break_cow(struct ksm_rmap_item *rmap_item)
@@ -1042,7 +1084,7 @@ static int unmerge_and_remove_all_rmap_items(void)
			goto mm_exiting;
 
		for_each_vma(vmi, vma) {
-			if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+			if (!vma_ksm_mergeable(vma) || !vma->anon_vma)
				continue;
			err = unmerge_ksm_pages(vma,
						vma->vm_start, vma->vm_end);
@@ -1065,6 +1107,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 
			mm_slot_free(mm_slot_cache, mm_slot);
			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+			clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
			mmdrop(mm);
		} else
			spin_unlock(&ksm_mmlist_lock);
@@ -2409,8 +2452,9 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
			goto no_vmas;
 
		for_each_vma(vmi, vma) {
-			if (!(vma->vm_flags & VM_MERGEABLE))
+			if (!vma_ksm_mergeable(vma))
				continue;
+
			if (ksm_scan.address < vma->vm_start)
				ksm_scan.address = vma->vm_start;
			if (!vma->anon_vma)
@@ -2495,6 +2539,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 
		mm_slot_free(mm_slot_cache, mm_slot);
		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
		mmap_read_unlock(mm);
		mmdrop(mm);
	} else {
@@ -2579,28 +2624,12 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 
	switch (advice) {
	case MADV_MERGEABLE:
-		/*
-		 * Be somewhat over-protective for now!
-		 */
-		if (*vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
-				 VM_PFNMAP | VM_IO | VM_DONTEXPAND |
-				 VM_HUGETLB | VM_MIXEDMAP))
-			return 0;		/* just ignore the advice */
-
-		if (vma_is_dax(vma))
-			return 0;
-
-#ifdef VM_SAO
-		if (*vm_flags & VM_SAO)
-			return 0;
-#endif
-#ifdef VM_SPARC_ADI
-		if (*vm_flags & VM_SPARC_ADI)
+		if (!vma_ksm_compatible(vma))
			return 0;
-#endif
 
-		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
-			err = __ksm_enter(mm);
+		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags) &&
+		    !test_bit(MMF_VM_MERGE_ANY, &mm->flags)) {
+			err = __ksm_enter(mm, MMF_VM_MERGEABLE);
			if (err)
				return err;
		}
@@ -2626,7 +2655,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 }
 EXPORT_SYMBOL_GPL(ksm_madvise);
 
-int __ksm_enter(struct mm_struct *mm)
+int __ksm_enter(struct mm_struct *mm, int flag)
 {
	struct ksm_mm_slot *mm_slot;
	struct mm_slot *slot;
@@ -2659,7 +2688,7 @@ int __ksm_enter(struct mm_struct *mm)
		list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node);
	spin_unlock(&ksm_mmlist_lock);
 
-	set_bit(MMF_VM_MERGEABLE, &mm->flags);
+	set_bit(flag, &mm->flags);
	mmgrab(mm);
 
	if (needs_wakeup)
@@ -2668,12 +2697,17 @@ int __ksm_enter(struct mm_struct *mm)
	return 0;
 }
 
-void __ksm_exit(struct mm_struct *mm)
+void __ksm_exit(struct mm_struct *mm, int flag)
 {
	struct ksm_mm_slot *mm_slot;
	struct mm_slot *slot;
	int easy_to_free = 0;
 
+	if (!(current->flags & PF_EXITING) &&
+	    flag == MMF_VM_MERGE_ANY &&
+	    test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
+
	/*
	 * This process is exiting: if it's straightforward (as is the
	 * case when ksmd was never running), free mm_slot immediately.
@@ -2700,7 +2734,7 @@ void __ksm_exit(struct mm_struct *mm)
 
	if (easy_to_free) {
		mm_slot_free(mm_slot_cache, mm_slot);
-		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(flag, &mm->flags);
		mmdrop(mm);
	} else if (mm_slot) {
		mmap_write_lock(mm);