From patchwork Sun Apr 10 13:54:38 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12808149
Date: Sun, 10 Apr 2022 06:54:38 -0700
In-Reply-To: <20220410135445.3897054-1-zokeefe@google.com>
Message-Id: <20220410135445.3897054-6-zokeefe@google.com>
References: <20220410135445.3897054-1-zokeefe@google.com>
Subject: [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse
From: "Zach O'Keefe"
To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox, Michal Hocko,
    Pasha Tatashin, SeongJae Park, Song Liu, Vlastimil Babka, Yang Shi,
    Zi Yan, linux-mm@kvack.org
Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
    Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
    Ivan Kokshaysky, "James E.J. Bottomley", Jens Axboe,
    "Kirill A. Shutemov", Matt Turner, Max Filippov, Miaohe Lin,
    Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
    Thomas Bogendoerfer, "Zach O'Keefe"

This idea was introduced by David Rientjes[1], and the semantics and
implementation were introduced and discussed in a previous PATCH RFC[2].

Introduce a new madvise mode, MADV_COLLAPSE, that allows users to request
a synchronous collapse of memory at their own expense. The benefits of
this approach are:

* CPU cycles are charged to the process that wants the THP
* the unpredictable timing of khugepaged collapse is avoided

Immediate users of this new functionality include:

* immediately backing executable text with hugepages. The current support
  provided by CONFIG_READ_ONLY_THP_FOR_FS may take too long on a large
  system.
* malloc implementations that manage memory in hugepage-sized chunks, but
  sometimes subrelease memory back to the system in native-sized chunks
  via MADV_DONTNEED, zapping the PMD. Later, when the memory is hot, the
  implementation could madvise(MADV_COLLAPSE) to re-back the memory with
  THPs and regain TLB performance (see the usage sketch below).

Allocation semantics are the same as khugepaged's, and depend on (1) the
active sysfs settings /sys/kernel/mm/transparent_hugepage/enabled and
/sys/kernel/mm/transparent_hugepage/khugepaged/defrag, and (2) the VMA
flags of the memory range being collapsed.

Only privately-mapped anon memory is supported for now.
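For illustration only, not part of the patch: a minimal user-space sketch
of how a caller on a kernel carrying this series might use the new mode.
The fallback #define of MADV_COLLAPSE to 25 and the 2MiB PMD size are
assumptions, and whether the collapse succeeds still depends on the sysfs
settings and VMA flags described above.

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	#ifndef MADV_COLLAPSE
	#define MADV_COLLAPSE 25	/* value proposed by this series */
	#endif

	int main(void)
	{
		const size_t pmd_size = 2UL << 20;	/* assumes 2MiB PMD-sized THPs */
		const size_t len = 2 * pmd_size;
		unsigned long addr;
		void *map;

		/* Privately-mapped anon memory: the only case supported for now. */
		map = mmap(NULL, len + pmd_size, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (map == MAP_FAILED)
			return 1;

		/* Round up to a PMD boundary so the range spans whole hugepages. */
		addr = ((unsigned long)map + pmd_size - 1) & ~(pmd_size - 1);

		/* Fault the range in with base pages, e.g. after a prior MADV_DONTNEED. */
		memset((void *)addr, 1, len);

		/*
		 * Request a synchronous collapse into THPs; the CPU cost is paid
		 * here, by the caller, rather than later by khugepaged.
		 */
		if (madvise((void *)addr, len, MADV_COLLAPSE))
			perror("madvise(MADV_COLLAPSE)");

		return 0;
	}

Per the diff below, madvise() returns 0 once every hugepage-aligned/sized
region in the range is backed by a PMD-mapped THP, and otherwise an errno
derived from the last failing scan result via madvise_collapse_errno().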
[1] https://lore.kernel.org/linux-mm/d098c392-273a-36a4-1a29-59731cdf5d3d@google.com/
[2] https://lore.kernel.org/linux-mm/20220308213417.1407042-1-zokeefe@google.com/

Suggested-by: David Rientjes
Signed-off-by: Zach O'Keefe
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 include/linux/huge_mm.h                |  12 ++
 include/uapi/asm-generic/mman-common.h |   2 +
 mm/khugepaged.c                        | 151 ++++++++++++++++++++++---
 mm/madvise.c                           |   5 +
 4 files changed, 157 insertions(+), 13 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 816a9937f30e..ddad7c7af44e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -236,6 +236,9 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 
 int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags,
 		     int advice);
+int madvise_collapse(struct vm_area_struct *vma,
+		     struct vm_area_struct **prev,
+		     unsigned long start, unsigned long end);
 void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
 			   unsigned long end, long adjust_next);
 spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
@@ -392,6 +395,15 @@ static inline int hugepage_madvise(struct vm_area_struct *vma,
 	BUG();
 	return 0;
 }
+
+static inline int madvise_collapse(struct vm_area_struct *vma,
+				   struct vm_area_struct **prev,
+				   unsigned long start, unsigned long end)
+{
+	BUG();
+	return 0;
+}
+
 static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
 					 unsigned long start,
 					 unsigned long end,
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 6c1aa92a92e4..6ce1f1ceb432 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -77,6 +77,8 @@
 #define MADV_DONTNEED_LOCKED	24	/* like DONTNEED, but drop locked pages too */
 
+#define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ed025dbbd7e6..c5c484b7e394 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -846,7 +846,6 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 	return khugepaged_defrag() ? GFP_TRANSHUGE : GFP_TRANSHUGE_LIGHT;
 }
 
-#ifdef CONFIG_NUMA
 static int khugepaged_find_target_node(struct collapse_control *cc)
 {
 	int nid, target_node = 0, max_value = 0;
@@ -872,6 +871,24 @@ static int khugepaged_find_target_node(struct collapse_control *cc)
 	return target_node;
 }
 
+static struct page *alloc_hpage(struct collapse_control *cc, gfp_t gfp,
+				int node)
+{
+	VM_BUG_ON_PAGE(cc->hpage, cc->hpage);
+
+	cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
+	if (unlikely(!cc->hpage)) {
+		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+		cc->hpage = ERR_PTR(-ENOMEM);
+		return NULL;
+	}
+
+	prep_transhuge_page(cc->hpage);
+	count_vm_event(THP_COLLAPSE_ALLOC);
+	return cc->hpage;
+}
+
+#ifdef CONFIG_NUMA
 static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 {
 	if (IS_ERR(*hpage)) {
@@ -892,18 +909,7 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 static struct page *khugepaged_alloc_page(struct collapse_control *cc,
 					  gfp_t gfp, int node)
 {
-	VM_BUG_ON_PAGE(cc->hpage, cc->hpage);
-
-	cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
-	if (unlikely(!cc->hpage)) {
-		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
-		cc->hpage = ERR_PTR(-ENOMEM);
-		return NULL;
-	}
-
-	prep_transhuge_page(cc->hpage);
-	count_vm_event(THP_COLLAPSE_ALLOC);
-	return cc->hpage;
+	return alloc_hpage(cc, gfp, node);
 }
 #else
 static int khugepaged_find_target_node(struct collapse_control *cc)
@@ -2456,3 +2462,122 @@ void khugepaged_min_free_kbytes_update(void)
 	set_recommended_min_free_kbytes();
 	mutex_unlock(&khugepaged_mutex);
 }
+
+static void madvise_collapse_cleanup_page(struct page **hpage)
+{
+	if (!IS_ERR(*hpage) && *hpage)
+		put_page(*hpage);
+	*hpage = NULL;
+}
+
+int madvise_collapse_errno(enum scan_result r)
+{
+	switch (r) {
+	case SCAN_PMD_NULL:
+	case SCAN_ADDRESS_RANGE:
+	case SCAN_VMA_NULL:
+	case SCAN_PTE_NON_PRESENT:
+	case SCAN_PAGE_NULL:
+		/*
+		 * Addresses in the specified range are not currently mapped,
+		 * or are outside the AS of the process.
+		 */
+		return -ENOMEM;
+	case SCAN_ALLOC_HUGE_PAGE_FAIL:
+	case SCAN_CGROUP_CHARGE_FAIL:
+		/* A kernel resource was temporarily unavailable. */
+		return -EAGAIN;
+	default:
+		return -EINVAL;
+	}
+}
+
+int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
+		     unsigned long start, unsigned long end)
+{
+	struct collapse_control cc = {
+		.last_target_node = NUMA_NO_NODE,
+		.hpage = NULL,
+		.alloc_hpage = &alloc_hpage,
+	};
+	struct mm_struct *mm = vma->vm_mm;
+	struct collapse_result cr;
+	unsigned long hstart, hend, addr;
+	int thps = 0, nr_hpages = 0;
+
+	BUG_ON(vma->vm_start > start);
+	BUG_ON(vma->vm_end < end);
+
+	*prev = vma;
+
+	if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file)
+		return -EINVAL;
+
+	hstart = (start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
+	hend = end & HPAGE_PMD_MASK;
+	nr_hpages = (hend - hstart) >> HPAGE_PMD_SHIFT;
+
+	if (hstart >= hend || !transparent_hugepage_active(vma))
+		return -EINVAL;
+
+	mmgrab(mm);
+	lru_add_drain();
+
+	for (addr = hstart; ; ) {
+		mmap_assert_locked(mm);
+		cond_resched();
+		memset(&cr, 0, sizeof(cr));
+
+		if (unlikely(khugepaged_test_exit(mm)))
+			break;
+
+		memset(cc.node_load, 0, sizeof(cc.node_load));
+		khugepaged_scan_pmd(mm, vma, addr, &cc, &cr);
+		if (cr.dropped_mmap_lock)
+			*prev = NULL;  /* tell madvise we dropped mmap_lock */
+
+		switch (cr.result) {
+		/* Whitelisted set of results where continuing OK */
+		case SCAN_SUCCEED:
+		case SCAN_PMD_MAPPED:
+			++thps;
+		case SCAN_PMD_NULL:
+		case SCAN_PTE_NON_PRESENT:
+		case SCAN_PTE_UFFD_WP:
+		case SCAN_PAGE_RO:
+		case SCAN_LACK_REFERENCED_PAGE:
+		case SCAN_PAGE_NULL:
+		case SCAN_PAGE_COUNT:
+		case SCAN_PAGE_LOCK:
+		case SCAN_PAGE_COMPOUND:
+			break;
+		case SCAN_PAGE_LRU:
+			lru_add_drain_all();
+			goto retry;
+		default:
+			/* Other error, exit */
+			goto break_loop;
+		}
+		addr += HPAGE_PMD_SIZE;
+		if (addr >= hend)
+			break;
+retry:
+		if (cr.dropped_mmap_lock) {
+			mmap_read_lock(mm);
+			if (hugepage_vma_revalidate(mm, addr, &vma))
+				goto out;
+		}
+		madvise_collapse_cleanup_page(&cc.hpage);
+	}
+
+break_loop:
+	/* madvise_walk_vmas() expects us to hold mmap_lock on return */
+	if (cr.dropped_mmap_lock)
+		mmap_read_lock(mm);
+out:
+	mmap_assert_locked(mm);
+	madvise_collapse_cleanup_page(&cc.hpage);
+	mmdrop(mm);
+
+	return thps == nr_hpages ? 0 : madvise_collapse_errno(cr.result);
+}
diff --git a/mm/madvise.c b/mm/madvise.c
index ec03a76244b7..7ad53e5311cf 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -59,6 +59,7 @@ static int madvise_need_mmap_write(int behavior)
 	case MADV_FREE:
 	case MADV_POPULATE_READ:
 	case MADV_POPULATE_WRITE:
+	case MADV_COLLAPSE:
 		return 0;
 	default:
 		/* be safe, default to 1. list exceptions explicitly */
@@ -1051,6 +1052,8 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
 		if (error)
 			goto out;
 		break;
+	case MADV_COLLAPSE:
+		return madvise_collapse(vma, prev, start, end);
 	}
 
 	anon_name = anon_vma_name(vma);
@@ -1144,6 +1147,7 @@ madvise_behavior_valid(int behavior)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	case MADV_HUGEPAGE:
 	case MADV_NOHUGEPAGE:
+	case MADV_COLLAPSE:
 #endif
 	case MADV_DONTDUMP:
 	case MADV_DODUMP:
@@ -1333,6 +1337,7 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
 *  MADV_NOHUGEPAGE - mark the given range as not worth being backed by
 *		transparent huge pages so the existing pages will not be
 *		coalesced into THP and new pages will not be allocated as THP.
+*  MADV_COLLAPSE - synchronously coalesce pages into new THP.
 *  MADV_DONTDUMP - the application wants to prevent pages in the given range
 *		from being included in its core dump.
 *  MADV_DODUMP - cancel MADV_DONTDUMP: no longer exclude from core dump.