From patchwork Tue Mar  8 21:34:05 2022
X-Patchwork-Submitter: Zach O'Keefe <zokeefe@google.com>
X-Patchwork-Id: 12774391
Date: Tue, 8 Mar 2022 13:34:05 -0800
In-Reply-To: <20220308213417.1407042-1-zokeefe@google.com>
Message-Id: <20220308213417.1407042-3-zokeefe@google.com>
References: <20220308213417.1407042-1-zokeefe@google.com>
X-Mailer: git-send-email 2.35.1.616.g0bdcbb4464-goog
Subject: [RFC PATCH 02/14] mm/khugepaged: add struct collapse_control
From: "Zach O'Keefe" <zokeefe@google.com>
To: Alex Shi, David Hildenbrand, David Rientjes, Michal Hocko,
    Pasha Tatashin, SeongJae Park, Song Liu, Vlastimil Babka, Zi Yan,
    linux-mm@kvack.org
Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
    Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
    Ivan Kokshaysky, "James E.J. Bottomley", Jens Axboe,
    "Kirill A. Shutemov", Matthew Wilcox, Matt Turner, Max Filippov,
    Miaohe Lin, Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
    Richard Henderson, Thomas Bogendoerfer, Yang Shi, "Zach O'Keefe"

Modularize huge page collapse by introducing struct collapse_control.
This structure serves to describe the properties of the requested
collapse, as well as to serve as a local scratch pad for use during the
collapse itself.

Later in the series, when the madvise collapse context is introduced,
we will want to be able to ignore khugepaged_max_ptes_[none|swap|shared]
in that context, so the enforce_pte_scan_limits flag is included here as
a property of the requested collapse.
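As a rough sketch (illustrative only, not part of this patch) of how the
two contexts are expected to initialize the structure; the madvise
collapse context named below is hypothetical until it is introduced
later in the series:

    /* khugepaged context: honor the sysfs scan limits. */
    struct collapse_control khugepaged_cc;
    collapse_control_init(&khugepaged_cc,
                          /* enforce_pte_scan_limits= */ true);

    /*
     * Hypothetical madvise collapse context: the collapse was
     * explicitly requested by the caller, so the scan limits
     * would be ignored.
     */
    struct collapse_control madvise_cc;
    collapse_control_init(&madvise_cc,
                          /* enforce_pte_scan_limits= */ false);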
Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 120 ++++++++++++++++++++++++++++++------------------
 1 file changed, 76 insertions(+), 44 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a4e5eaf3eb01..36fc0099c445 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -85,6 +85,24 @@ static struct kmem_cache *mm_slot_cache __read_mostly;
 
 #define MAX_PTE_MAPPED_THP 8
 
+struct collapse_control {
+	/* Respect khugepaged_max_ptes_[none|swap|shared] */
+	bool enforce_pte_scan_limits;
+
+	/* Num pages scanned per node */
+	int node_load[MAX_NUMNODES];
+
+	/* Last target selected in khugepaged_find_target_node() for this scan */
+	int last_target_node;
+};
+
+static void collapse_control_init(struct collapse_control *cc,
+				  bool enforce_pte_scan_limits)
+{
+	cc->enforce_pte_scan_limits = enforce_pte_scan_limits;
+	cc->last_target_node = NUMA_NO_NODE;
+}
+
 /**
  * struct mm_slot - hash lookup from mm to mm_slot
  * @hash: hash collision list
@@ -601,6 +619,7 @@ static bool is_refcount_suitable(struct page *page)
 static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 					unsigned long address,
 					pte_t *pte,
+					bool enforce_pte_scan_limits,
 					struct list_head *compound_pagelist)
 {
 	struct page *page = NULL;
@@ -614,7 +633,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		if (pte_none(pteval) || (pte_present(pteval) &&
 				is_zero_pfn(pte_pfn(pteval)))) {
 			if (!userfaultfd_armed(vma) &&
-			    ++none_or_zero <= khugepaged_max_ptes_none) {
+			    (++none_or_zero <= khugepaged_max_ptes_none ||
+			     !enforce_pte_scan_limits)) {
 				continue;
 			} else {
 				result = SCAN_EXCEED_NONE_PTE;
@@ -634,8 +654,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 
 		VM_BUG_ON_PAGE(!PageAnon(page), page);
 
-		if (page_mapcount(page) > 1 &&
-				++shared > khugepaged_max_ptes_shared) {
+		if (page_mapcount(page) > 1 && enforce_pte_scan_limits &&
+		    ++shared > khugepaged_max_ptes_shared) {
 			result = SCAN_EXCEED_SHARED_PTE;
 			count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 			goto out;
@@ -785,9 +805,7 @@ static void khugepaged_alloc_sleep(void)
 	remove_wait_queue(&khugepaged_wait, &wait);
 }
 
-static int khugepaged_node_load[MAX_NUMNODES];
-
-static bool khugepaged_scan_abort(int nid)
+static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
 {
 	int i;
 
@@ -799,11 +817,11 @@ static bool khugepaged_scan_abort(int nid)
 		return false;
 
 	/* If there is a count for this node already, it must be acceptable */
-	if (khugepaged_node_load[nid])
+	if (cc->node_load[nid])
 		return false;
 
 	for (i = 0; i < MAX_NUMNODES; i++) {
-		if (!khugepaged_node_load[i])
+		if (!cc->node_load[i])
 			continue;
 		if (node_distance(nid, i) > node_reclaim_distance)
 			return true;
@@ -818,28 +836,28 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 }
 
 #ifdef CONFIG_NUMA
-static int khugepaged_find_target_node(void)
+static int khugepaged_find_target_node(struct collapse_control *cc)
 {
-	static int last_khugepaged_target_node = NUMA_NO_NODE;
 	int nid, target_node = 0, max_value = 0;
 
 	/* find first node with max normal pages hit */
 	for (nid = 0; nid < MAX_NUMNODES; nid++)
-		if (khugepaged_node_load[nid] > max_value) {
-			max_value = khugepaged_node_load[nid];
+		if (cc->node_load[nid] > max_value) {
+			max_value = cc->node_load[nid];
 			target_node = nid;
 		}
 
 	/* do some balance if several nodes have the same hit record */
-	if (target_node <= last_khugepaged_target_node)
-		for (nid = last_khugepaged_target_node + 1; nid < MAX_NUMNODES;
-		     nid++)
-			if (max_value == khugepaged_node_load[nid]) {
+	if (target_node <= cc->last_target_node)
+		for (nid = cc->last_target_node + 1; nid < MAX_NUMNODES;
+		     nid++) {
+			if (max_value == cc->node_load[nid]) {
 				target_node = nid;
 				break;
 			}
+		}
 
-	last_khugepaged_target_node = target_node;
+	cc->last_target_node = target_node;
 	return target_node;
 }
 
@@ -877,7 +895,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
 	return *hpage;
 }
 #else
-static int khugepaged_find_target_node(void)
+static int khugepaged_find_target_node(struct collapse_control *cc)
 {
 	return 0;
 }
@@ -1043,7 +1061,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 static void collapse_huge_page(struct mm_struct *mm,
 			       unsigned long address,
 			       struct page **hpage,
-			       int node, int referenced, int unmapped)
+			       int node, int referenced, int unmapped,
+			       int enforce_pte_scan_limits)
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
@@ -1141,7 +1160,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 
 	spin_lock(pte_ptl);
 	isolated = __collapse_huge_page_isolate(vma, address, pte,
-			&compound_pagelist);
+			enforce_pte_scan_limits, &compound_pagelist);
 	spin_unlock(pte_ptl);
 
 	if (unlikely(!isolated)) {
@@ -1206,7 +1225,8 @@ static void collapse_huge_page(struct mm_struct *mm,
 static int khugepaged_scan_pmd(struct mm_struct *mm,
 			       struct vm_area_struct *vma,
 			       unsigned long address,
-			       struct page **hpage)
+			       struct page **hpage,
+			       struct collapse_control *cc)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
@@ -1226,13 +1246,14 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		goto out;
 	}
 
-	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
+	memset(cc->node_load, 0, sizeof(cc->node_load));
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
 	for (_address = address, _pte = pte; _pte < pte+HPAGE_PMD_NR;
 	     _pte++, _address += PAGE_SIZE) {
 		pte_t pteval = *_pte;
 		if (is_swap_pte(pteval)) {
-			if (++unmapped <= khugepaged_max_ptes_swap) {
+			if (++unmapped <= khugepaged_max_ptes_swap ||
+			    !cc->enforce_pte_scan_limits) {
 				/*
 				 * Always be strict with uffd-wp
 				 * enabled swap entries. Please see
@@ -1251,7 +1272,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		}
 		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
 			if (!userfaultfd_armed(vma) &&
-			    ++none_or_zero <= khugepaged_max_ptes_none) {
+			    (++none_or_zero <= khugepaged_max_ptes_none ||
+			     !cc->enforce_pte_scan_limits)) {
 				continue;
 			} else {
 				result = SCAN_EXCEED_NONE_PTE;
@@ -1282,7 +1304,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		}
 
 		if (page_mapcount(page) > 1 &&
-				++shared > khugepaged_max_ptes_shared) {
+				++shared > khugepaged_max_ptes_shared &&
+				cc->enforce_pte_scan_limits) {
 			result = SCAN_EXCEED_SHARED_PTE;
 			count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 			goto out_unmap;
@@ -1292,16 +1315,16 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 
 		/*
 		 * Record which node the original page is from and save this
-		 * information to khugepaged_node_load[].
+		 * information to cc->node_load[].
 		 * Khugepaged will allocate hugepage from the node has the max
 		 * hit record.
 		 */
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node)) {
+		if (khugepaged_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
-		khugepaged_node_load[node]++;
+		cc->node_load[node]++;
 		if (!PageLRU(page)) {
 			result = SCAN_PAGE_LRU;
 			goto out_unmap;
@@ -1352,10 +1375,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
-		node = khugepaged_find_target_node();
+		node = khugepaged_find_target_node(cc);
 		/* collapse_huge_page will return with the mmap_lock released */
 		collapse_huge_page(mm, address, hpage, node,
-				referenced, unmapped);
+				referenced, unmapped,
+				cc->enforce_pte_scan_limits);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
@@ -1992,7 +2016,8 @@ static void collapse_file(struct mm_struct *mm,
 }
 
 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage)
+		struct file *file, pgoff_t start, struct page **hpage,
+		struct collapse_control *cc)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2003,14 +2028,15 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 
 	present = 0;
 	swap = 0;
-	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
+	memset(cc->node_load, 0, sizeof(cc->node_load));
 	rcu_read_lock();
 	xas_for_each(&xas, page, start + HPAGE_PMD_NR - 1) {
 		if (xas_retry(&xas, page))
 			continue;
 
 		if (xa_is_value(page)) {
-			if (++swap > khugepaged_max_ptes_swap) {
+			if (cc->enforce_pte_scan_limits &&
+			    ++swap > khugepaged_max_ptes_swap) {
 				result = SCAN_EXCEED_SWAP_PTE;
 				count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
 				break;
@@ -2028,11 +2054,11 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 		}
 
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node)) {
+		if (khugepaged_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			break;
 		}
-		khugepaged_node_load[node]++;
+		cc->node_load[node]++;
 
 		if (!PageLRU(page)) {
 			result = SCAN_PAGE_LRU;
@@ -2061,11 +2087,12 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 	rcu_read_unlock();
 
 	if (result == SCAN_SUCCEED) {
-		if (present < HPAGE_PMD_NR - khugepaged_max_ptes_none) {
+		if (present < HPAGE_PMD_NR - khugepaged_max_ptes_none &&
+		    cc->enforce_pte_scan_limits) {
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			node = khugepaged_find_target_node();
+			node = khugepaged_find_target_node(cc);
 			collapse_file(mm, file, start, hpage, node);
 		}
 	}
@@ -2074,7 +2101,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 }
 #else
 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage)
+		struct file *file, pgoff_t start, struct page **hpage,
+		struct collapse_control *cc)
 {
 	BUILD_BUG();
 }
@@ -2085,7 +2113,8 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
 #endif
 
 static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
-					    struct page **hpage)
+					    struct page **hpage,
+					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
 {
@@ -2161,12 +2190,12 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 
 				mmap_read_unlock(mm);
 				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, hpage);
+				khugepaged_scan_file(mm, file, pgoff, hpage, cc);
 				fput(file);
 			} else {
 				ret = khugepaged_scan_pmd(mm, vma,
 						khugepaged_scan.address,
-						hpage);
+						hpage, cc);
 			}
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
@@ -2222,7 +2251,7 @@ static int khugepaged_wait_event(void)
 		kthread_should_stop();
 }
 
-static void khugepaged_do_scan(void)
+static void khugepaged_do_scan(struct collapse_control *cc)
 {
 	struct page *hpage = NULL;
 	unsigned int progress = 0, pass_through_head = 0;
@@ -2246,7 +2275,7 @@ static void khugepaged_do_scan(void)
 		if (khugepaged_has_work() &&
 		    pass_through_head < 2)
 			progress += khugepaged_scan_mm_slot(pages - progress,
-							    &hpage);
+							    &hpage, cc);
 		else
 			progress = pages;
 		spin_unlock(&khugepaged_mm_lock);
@@ -2285,12 +2314,15 @@ static void khugepaged_wait_work(void)
 static int khugepaged(void *none)
 {
 	struct mm_slot *mm_slot;
+	struct collapse_control cc;
+
+	collapse_control_init(&cc, /* enforce_pte_scan_limits= */ 1);
 
 	set_freezable();
 	set_user_nice(current, MAX_NICE);
 
 	while (!kthread_should_stop()) {
-		khugepaged_do_scan();
+		khugepaged_do_scan(&cc);
 		khugepaged_wait_work();
 	}
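For intuition, the target-node selection that khugepaged_find_target_node()
now performs over cc->node_load[] and cc->last_target_node (pick the first
node with the maximum hit count, round-robin among ties across scans) can
be exercised in a minimal userspace sketch. This is toy code, not part of
the patch; all names below are invented:

    #include <stdio.h>

    #define MAX_NODES 4
    #define NO_NODE   (-1)

    struct toy_cc {
            int node_load[MAX_NODES];
            int last_target_node;
    };

    static int find_target_node(struct toy_cc *cc)
    {
            int nid, target_node = 0, max_value = 0;

            /* First node with the max normal-page hit count. */
            for (nid = 0; nid < MAX_NODES; nid++) {
                    if (cc->node_load[nid] > max_value) {
                            max_value = cc->node_load[nid];
                            target_node = nid;
                    }
            }

            /* Balance among nodes sharing the same hit record. */
            if (target_node <= cc->last_target_node) {
                    for (nid = cc->last_target_node + 1; nid < MAX_NODES; nid++) {
                            if (cc->node_load[nid] == max_value) {
                                    target_node = nid;
                                    break;
                            }
                    }
            }

            cc->last_target_node = target_node;
            return target_node;
    }

    int main(void)
    {
            struct toy_cc cc = { .node_load = { 3, 1, 3, 0 },
                                 .last_target_node = NO_NODE };

            /* Nodes 0 and 2 tie: successive calls alternate 0, 2, 0. */
            printf("%d\n", find_target_node(&cc)); /* 0 */
            printf("%d\n", find_target_node(&cc)); /* 2 */
            printf("%d\n", find_target_node(&cc)); /* 0 */
            return 0;
    }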