From patchwork Wed May 4 21:44:29 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12838666
Date: Wed, 4 May 2022 14:44:29 -0700
In-Reply-To: <20220504214437.2850685-1-zokeefe@google.com>
Message-Id: <20220504214437.2850685-6-zokeefe@google.com>
References: <20220504214437.2850685-1-zokeefe@google.com>
Subject: [PATCH v5 05/13] mm/khugepaged: pipe enum scan_result codes back to callers
From: "Zach O'Keefe"
To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, Peter Xu, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm@kvack.org
Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, "James E.J. Bottomley", Jens Axboe,
	"Kirill A. Shutemov", Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Thomas Bogendoerfer,
	"Zach O'Keefe"

Pipe enum scan_result codes back through return values of functions
downstream of khugepaged_scan_file() and khugepaged_scan_pmd() to inform
callers if the operation was successful, and if not, why.

Since khugepaged_scan_pmd()'s return value already has a specific meaning
(whether mmap_lock was unlocked or not), add a bool* argument to
khugepaged_scan_pmd() to retrieve this information.

Change khugepaged to take action based on the return values of
khugepaged_scan_file() and khugepaged_scan_pmd() instead of acting deep
within the collapsing functions themselves.
Signed-off-by: Zach O'Keefe
Acked-by: David Rientjes
---
 mm/khugepaged.c | 72 ++++++++++++++++++++++++++-----------------------
 1 file changed, 39 insertions(+), 33 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6095fcb3f07c..1314caed65b0 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -732,13 +732,13 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 	result = SCAN_SUCCEED;
 	trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 					    referenced, writable, result);
-	return 1;
+	return SCAN_SUCCEED;
 }
 out:
 	release_pte_pages(pte, _pte, compound_pagelist);
 	trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 					    referenced, writable, result);
-	return 0;
+	return result;
 }
 
 static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
@@ -1097,9 +1097,9 @@ static int alloc_charge_hpage(struct mm_struct *mm, struct collapse_control *cc)
 	return SCAN_SUCCEED;
 }
 
-static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
-			       int referenced, int unmapped,
-			       struct collapse_control *cc)
+static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
+			      int referenced, int unmapped,
+			      struct collapse_control *cc)
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
@@ -1107,7 +1107,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pgtable_t pgtable;
 	struct page *new_page;
 	spinlock_t *pmd_ptl, *pte_ptl;
-	int isolated = 0, result = 0;
+	int result = SCAN_FAIL;
 	struct vm_area_struct *vma;
 	struct mmu_notifier_range range;
 
@@ -1187,11 +1187,11 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	mmu_notifier_invalidate_range_end(&range);
 
 	spin_lock(pte_ptl);
-	isolated = __collapse_huge_page_isolate(vma, address, pte,
-						&compound_pagelist);
+	result = __collapse_huge_page_isolate(vma, address, pte,
+					      &compound_pagelist);
 	spin_unlock(pte_ptl);
 
-	if (unlikely(!isolated)) {
+	if (unlikely(result != SCAN_SUCCEED)) {
 		pte_unmap(pte);
 		spin_lock(pmd_ptl);
 		BUG_ON(!pmd_none(*pmd));
@@ -1239,24 +1239,23 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	cc->hpage = NULL;
 
-	khugepaged_pages_collapsed++;
 	result = SCAN_SUCCEED;
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
 	if (!IS_ERR_OR_NULL(cc->hpage))
 		mem_cgroup_uncharge(page_folio(cc->hpage));
-	trace_mm_collapse_huge_page(mm, isolated, result);
-	return;
+	trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
+	return result;
 }
 
 static int khugepaged_scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
-			       unsigned long address,
+			       unsigned long address, bool *mmap_locked,
 			       struct collapse_control *cc)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
-	int ret = 0, result = 0, referenced = 0;
+	int result = SCAN_FAIL, referenced = 0;
 	int none_or_zero = 0, shared = 0;
 	struct page *page = NULL;
 	unsigned long _address;
@@ -1391,18 +1390,19 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 		result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
 		result = SCAN_SUCCEED;
-		ret = 1;
 	}
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
-	if (ret) {
+	if (result == SCAN_SUCCEED) {
 		/* collapse_huge_page will return with the mmap_lock released */
-		collapse_huge_page(mm, address, referenced, unmapped, cc);
+		*mmap_locked = false;
+		result = collapse_huge_page(mm, address, referenced,
+					    unmapped, cc);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
 				     none_or_zero, result, unmapped);
-	return ret;
+	return result;
 }
 
 static void collect_mm_slot(struct mm_slot *mm_slot)
@@ -1679,8 +1679,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  *    + restore gaps in the page cache;
  *    + unlock and free huge page;
  */
-static void collapse_file(struct mm_struct *mm, struct file *file,
-			  pgoff_t start, struct collapse_control *cc)
+static int collapse_file(struct mm_struct *mm, struct file *file,
+			 pgoff_t start, struct collapse_control *cc)
 {
 	struct address_space *mapping = file->f_mapping;
 	struct page *new_page;
@@ -1982,8 +1982,6 @@ static void collapse_file(struct mm_struct *mm, struct file *file,
 		 */
 		retract_page_tables(mapping, start);
 		cc->hpage = NULL;
-
-		khugepaged_pages_collapsed++;
 	} else {
 		struct page *page;
@@ -2031,10 +2029,11 @@ static void collapse_file(struct mm_struct *mm, struct file *file,
 	if (!IS_ERR_OR_NULL(cc->hpage))
 		mem_cgroup_uncharge(page_folio(cc->hpage));
 	/* TODO: tracepoints */
+	return result;
 }
 
-static void khugepaged_scan_file(struct mm_struct *mm, struct file *file,
-				 pgoff_t start, struct collapse_control *cc)
+static int khugepaged_scan_file(struct mm_struct *mm, struct file *file,
+				pgoff_t start, struct collapse_control *cc)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2107,15 +2106,16 @@ static void khugepaged_scan_file(struct mm_struct *mm, struct file *file,
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			collapse_file(mm, file, start, cc);
+			result = collapse_file(mm, file, start, cc);
 		}
 	}
 	/* TODO: tracepoints */
+	return result;
 }
 #else
-static void khugepaged_scan_file(struct mm_struct *mm, struct file *file,
-				 pgoff_t start, struct collapse_control *cc)
+static int khugepaged_scan_file(struct mm_struct *mm, struct file *file, pgoff_t start,
+			       struct collapse_control *cc)
 {
 	BUILD_BUG();
 }
@@ -2187,7 +2187,9 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 			goto skip;
 
 		while (khugepaged_scan.address < hend) {
-			int ret;
+			int result;
+			bool mmap_locked = true;
+
 			cond_resched();
 			if (unlikely(khugepaged_test_exit(mm)))
 				goto breakouterloop;
@@ -2201,17 +2203,21 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 						khugepaged_scan.address);
 
 				mmap_read_unlock(mm);
-				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, cc);
+				mmap_locked = false;
+				result = khugepaged_scan_file(mm, file, pgoff,
+							      cc);
 				fput(file);
 			} else {
-				ret = khugepaged_scan_pmd(mm, vma,
-							  khugepaged_scan.address, cc);
+				result = khugepaged_scan_pmd(mm, vma,
+							     khugepaged_scan.address,
+							     &mmap_locked, cc);
 			}
+			if (result == SCAN_SUCCEED)
+				++khugepaged_pages_collapsed;
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
 			progress += HPAGE_PMD_NR;
-			if (ret)
+			if (!mmap_locked)
 				/* we released mmap_lock so break loop */
 				goto breakouterloop_mmap_lock;
 			if (progress >= pages)