From patchwork Mon May 2 18:17:06 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12834609
Date: Mon, 2 May 2022 11:17:06 -0700
In-Reply-To: <20220502181714.3483177-1-zokeefe@google.com>
Message-Id: <20220502181714.3483177-6-zokeefe@google.com>
References: <20220502181714.3483177-1-zokeefe@google.com>
X-Mailer: git-send-email 2.36.0.464.gb9c8b46e94-goog
Subject: [PATCH v4 05/13] mm/khugepaged: pipe enum scan_result codes back to callers
From: "Zach O'Keefe"
To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
 Michal Hocko, Pasha Tatashin, Peter Xu, SeongJae Park, Song Liu,
 Vlastimil Babka, Yang Shi, Zi Yan, linux-mm@kvack.org
Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
 Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
 Ivan Kokshaysky, "James E.J. Bottomley", Jens Axboe,
 "Kirill A. Shutemov", Matt Turner, Max Filippov, Miaohe Lin,
 Minchan Kim, Patrick Xia, Pavel Begunkov, Thomas Bogendoerfer,
 "Zach O'Keefe"

Pipe enum scan_result codes back through return values of functions
downstream of khugepaged_scan_file() and khugepaged_scan_pmd() to inform
callers if the operation was successful, and if not, why.

Since khugepaged_scan_pmd()'s return value already has a specific meaning
(whether mmap_lock was unlocked or not), add a bool *mmap_locked argument
to khugepaged_scan_pmd() to retrieve this information.

Change khugepaged to take action based on the return values of
khugepaged_scan_file() and khugepaged_scan_pmd() instead of acting deep
within the collapsing functions themselves.
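To illustrate the caller-side convention this patch establishes, here is a
minimal userspace sketch (not kernel code): a scan helper returns an
enum scan_result, reports whether "mmap_lock" is still held through a
bool * out-parameter, and the caller accounts for successes and reacts to
the lock being dropped. The enum values and the mmap_locked /
khugepaged_pages_collapsed names mirror the patch; the stub logic, types,
and scan_pmd() helper are simplified stand-ins, not the kernel
implementation.

/*
 * Standalone sketch of the calling convention: result codes come back as
 * return values, lock state comes back through a bool * argument, and the
 * caller (not the collapse path) takes action on both.
 */
#include <stdbool.h>
#include <stdio.h>

enum scan_result {
	SCAN_FAIL,
	SCAN_SUCCEED,
	SCAN_LACK_REFERENCED_PAGE,
};

static unsigned long khugepaged_pages_collapsed;

/* Stand-in for khugepaged_scan_pmd(): may "drop mmap_lock" to collapse. */
static enum scan_result scan_pmd(unsigned long address, bool *mmap_locked)
{
	*mmap_locked = true;		/* lock held on entry */

	if (address & 1)		/* pretend odd addresses aren't suitable */
		return SCAN_LACK_REFERENCED_PAGE;

	*mmap_locked = false;		/* the "collapse" path drops the lock */
	return SCAN_SUCCEED;
}

int main(void)
{
	unsigned long address;

	for (address = 0; address < 8; address++) {
		bool mmap_locked;
		enum scan_result result = scan_pmd(address, &mmap_locked);

		/* The caller, not the collapse path, counts successes. */
		if (result == SCAN_SUCCEED)
			++khugepaged_pages_collapsed;

		/* Lock dropped: khugepaged would break out and revalidate. */
		if (!mmap_locked)
			printf("address %lu: mmap_lock dropped, result %d\n",
			       address, result);
	}

	printf("collapsed: %lu\n", khugepaged_pages_collapsed);
	return 0;
}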
Signed-off-by: Zach O'Keefe
---
 mm/khugepaged.c | 85 +++++++++++++++++++++++++++----------------------
 1 file changed, 47 insertions(+), 38 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 755c40fe87d2..986344a04165 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -732,13 +732,13 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		result = SCAN_SUCCEED;
 		trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 						    referenced, writable, result);
-		return 1;
+		return SCAN_SUCCEED;
 	}
 out:
 	release_pte_pages(pte, _pte, compound_pagelist);
 	trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 					    referenced, writable, result);
-	return 0;
+	return result;
 }
 
 static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
@@ -1096,9 +1096,9 @@ static int alloc_charge_hpage(struct mm_struct *mm, struct collapse_control *cc)
 	return SCAN_SUCCEED;
 }
 
-static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
-			       int referenced, int unmapped,
-			       struct collapse_control *cc)
+static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
+			      int referenced, int unmapped,
+			      struct collapse_control *cc)
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
@@ -1106,7 +1106,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pgtable_t pgtable;
 	struct page *new_page;
 	spinlock_t *pmd_ptl, *pte_ptl;
-	int isolated = 0, result = 0;
+	int result = SCAN_FAIL;
 	struct vm_area_struct *vma;
 	struct mmu_notifier_range range;
 
@@ -1186,11 +1186,11 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	mmu_notifier_invalidate_range_end(&range);
 
 	spin_lock(pte_ptl);
-	isolated = __collapse_huge_page_isolate(vma, address, pte,
-			&compound_pagelist);
+	result = __collapse_huge_page_isolate(vma, address, pte,
+					      &compound_pagelist);
 	spin_unlock(pte_ptl);
 
-	if (unlikely(!isolated)) {
+	if (unlikely(result != SCAN_SUCCEED)) {
 		pte_unmap(pte);
 		spin_lock(pmd_ptl);
 		BUG_ON(!pmd_none(*pmd));
@@ -1238,25 +1238,23 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 
 	cc->hpage = NULL;
 
-	khugepaged_pages_collapsed++;
 	result = SCAN_SUCCEED;
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
 	if (!IS_ERR_OR_NULL(cc->hpage))
 		mem_cgroup_uncharge(page_folio(cc->hpage));
-	trace_mm_collapse_huge_page(mm, isolated, result);
-	return;
+	trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
+	return result;
 }
 
-static int khugepaged_scan_pmd(struct mm_struct *mm,
-			       struct vm_area_struct *vma,
-			       unsigned long address,
+static int khugepaged_scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
+			       unsigned long address, bool *mmap_locked,
 			       struct collapse_control *cc)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
-	int ret = 0, result = 0, referenced = 0;
+	int result = SCAN_FAIL, referenced = 0;
 	int none_or_zero = 0, shared = 0;
 	struct page *page = NULL;
 	unsigned long _address;
@@ -1266,6 +1264,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
+	*mmap_locked = true;
+
 	result = find_pmd_or_thp_or_none(mm, address, &pmd);
 	if (result != SCAN_SUCCEED)
 		goto out;
@@ -1391,18 +1391,22 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
 		result = SCAN_SUCCEED;
-		ret = 1;
 	}
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
-	if (ret) {
-		/* collapse_huge_page will return with the mmap_lock released */
-		collapse_huge_page(mm, address, referenced, unmapped, cc);
+	if (result == SCAN_SUCCEED) {
+		/*
+		 * collapse_huge_page() will return with the mmap_lock released
+		 * - so let the caller know mmap_lock was dropped
+		 */
+		*mmap_locked = false;
+		result = collapse_huge_page(mm, address, referenced,
+					    unmapped, cc);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
 				     none_or_zero, result, unmapped);
-	return ret;
+	return result;
 }
 
 static void collect_mm_slot(struct mm_slot *mm_slot)
@@ -1679,8 +1683,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  *  + restore gaps in the page cache;
  *  + unlock and free huge page;
  */
-static void collapse_file(struct mm_struct *mm, struct file *file,
-			  pgoff_t start, struct collapse_control *cc)
+static int collapse_file(struct mm_struct *mm, struct file *file,
+			 pgoff_t start, struct collapse_control *cc)
 {
 	struct address_space *mapping = file->f_mapping;
 	struct page *new_page;
@@ -1982,8 +1986,6 @@ static void collapse_file(struct mm_struct *mm, struct file *file,
 		 */
 		retract_page_tables(mapping, start);
 		cc->hpage = NULL;
-
-		khugepaged_pages_collapsed++;
 	} else {
 		struct page *page;
 
@@ -2031,11 +2033,12 @@ static void collapse_file(struct mm_struct *mm, struct file *file,
 	if (!IS_ERR_OR_NULL(cc->hpage))
 		mem_cgroup_uncharge(page_folio(cc->hpage));
 	/* TODO: tracepoints */
+	return result;
 }
 
-static void khugepaged_scan_file(struct mm_struct *mm,
-				 struct file *file, pgoff_t start,
-				 struct collapse_control *cc)
+static int khugepaged_scan_file(struct mm_struct *mm,
+				struct file *file, pgoff_t start,
+				struct collapse_control *cc)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2108,16 +2111,16 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			collapse_file(mm, file, start, cc);
+			result = collapse_file(mm, file, start, cc);
 		}
 	}
 
 	/* TODO: tracepoints */
+	return result;
 }
 #else
-static void khugepaged_scan_file(struct mm_struct *mm,
-				 struct file *file, pgoff_t start,
-				 struct collapse_control *cc)
+static int khugepaged_scan_file(struct mm_struct *mm, struct file *file, pgoff_t start,
+				struct collapse_control *cc)
 {
 	BUILD_BUG();
 }
@@ -2189,7 +2192,9 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 			goto skip;
 
 		while (khugepaged_scan.address < hend) {
-			int ret;
+			int result;
+			bool mmap_locked;
+
 			cond_resched();
 			if (unlikely(khugepaged_test_exit(mm)))
 				goto breakouterloop;
@@ -2203,17 +2208,21 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 						khugepaged_scan.address);
 
 				mmap_read_unlock(mm);
-				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, cc);
+				mmap_locked = false;
+				result = khugepaged_scan_file(mm, file, pgoff,
+							      cc);
 				fput(file);
 			} else {
-				ret = khugepaged_scan_pmd(mm, vma,
-						khugepaged_scan.address, cc);
+				result = khugepaged_scan_pmd(mm, vma,
+							     khugepaged_scan.address,
+							     &mmap_locked, cc);
 			}
+			if (result == SCAN_SUCCEED)
+				++khugepaged_pages_collapsed;
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
 			progress += HPAGE_PMD_NR;
-			if (ret)
+			if (!mmap_locked)
 				/* we released mmap_lock so break loop */
 				goto breakouterloop_mmap_lock;
 			if (progress >= pages)