From patchwork Tue May 10 20:32:19 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12845500
From: Yang Shi <shy828301@gmail.com>
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com,
    songliubraving@fb.com, riel@surriel.com, willy@infradead.org,
    ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v4 PATCH 5/8] mm: khugepaged: make khugepaged_enter() void function
Date: Tue, 10 May 2022 13:32:19 -0700
Message-Id: <20220510203222.24246-6-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.3
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

Most callers of khugepaged_enter() don't care about the return value.
Only dup_mmap(), the anonymous THP page fault path, and MADV_HUGEPAGE
handle the error by returning -ENOMEM, but it is not harmful for them
to ignore the error case either.  It also seems like overkill to fail
fork() or a page fault early just because khugepaged_enter() failed,
and MADV_HUGEPAGE sets the VM_HUGEPAGE flag regardless of the error.
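To make the "not harmful" argument concrete, below is a condensed sketch
of the only failure path that callers are now ignoring (my paraphrase of
the mm/khugepaged.c hunk further down, not verbatim kernel code):

/*
 * Condensed, paraphrased sketch of __khugepaged_enter() after this
 * patch.  The only possible failure is the mm_slot allocation, and it
 * happens before any state changes: MMF_VM_HUGEPAGE has not been set
 * yet, so a later khugepaged_enter() call (e.g. from a subsequent page
 * fault) will simply retry the registration.
 */
void __khugepaged_enter(struct mm_struct *mm)
{
	struct mm_slot *mm_slot = alloc_mm_slot();

	if (!mm_slot)
		return;	/* best effort: mm just isn't queued for scanning */

	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
		free_mm_slot(mm_slot);	/* already registered */
		return;
	}

	/* ... link mm_slot into khugepaged's scan list, mmgrab(mm),
	 * and wake up khugepaged ... */
}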
Acked-by: Song Liu
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/khugepaged.h | 30 ++++++++++++------------------
 kernel/fork.c              |  4 +---
 mm/huge_memory.c           |  4 ++--
 mm/khugepaged.c            | 18 +++++++-----------
 4 files changed, 22 insertions(+), 34 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 2fcc01891b47..0423d3619f26 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -12,10 +12,10 @@ extern struct attribute_group khugepaged_attr_group;
 extern int khugepaged_init(void);
 extern void khugepaged_destroy(void);
 extern int start_stop_khugepaged(void);
-extern int __khugepaged_enter(struct mm_struct *mm);
+extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
-extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-				      unsigned long vm_flags);
+extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+				       unsigned long vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
 #ifdef CONFIG_SHMEM
 extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
@@ -40,11 +40,10 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
 	(transparent_hugepage_flags &				\
 	 (1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG))
 
-static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
 	if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags))
-		return __khugepaged_enter(mm);
-	return 0;
+		__khugepaged_enter(mm);
 }
 
 static inline void khugepaged_exit(struct mm_struct *mm)
@@ -53,7 +52,7 @@ static inline void khugepaged_exit(struct mm_struct *mm)
 		__khugepaged_exit(mm);
 }
 
-static inline int khugepaged_enter(struct vm_area_struct *vma,
+static inline void khugepaged_enter(struct vm_area_struct *vma,
 				   unsigned long vm_flags)
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
@@ -62,27 +61,22 @@ static inline int khugepaged_enter(struct vm_area_struct *vma,
 		     (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) &&
 		    !(vm_flags & VM_NOHUGEPAGE) &&
 		    !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
-			if (__khugepaged_enter(vma->vm_mm))
-				return -ENOMEM;
-	return 0;
+			__khugepaged_enter(vma->vm_mm);
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
-static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
-	return 0;
 }
 static inline void khugepaged_exit(struct mm_struct *mm)
 {
 }
-static inline int khugepaged_enter(struct vm_area_struct *vma,
-				   unsigned long vm_flags)
+static inline void khugepaged_enter(struct vm_area_struct *vma,
+				    unsigned long vm_flags)
 {
-	return 0;
 }
-static inline int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-					     unsigned long vm_flags)
+static inline void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+					      unsigned long vm_flags)
 {
-	return 0;
 }
 static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
 					   unsigned long addr)
diff --git a/kernel/fork.c b/kernel/fork.c
index 536dc3289734..6692f5d78371 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -608,9 +608,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	retval = ksm_fork(mm, oldmm);
 	if (retval)
 		goto out;
-	retval = khugepaged_fork(mm, oldmm);
-	if (retval)
-		goto out;
+	khugepaged_fork(mm, oldmm);
 
 	retval = mas_expected_entries(&mas, oldmm->map_count);
 	if (retval)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 82434a9d4499..80e8b58b4f39 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -726,8 +726,8 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		return VM_FAULT_FALLBACK;
 	if (unlikely(anon_vma_prepare(vma)))
 		return VM_FAULT_OOM;
-	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
-		return VM_FAULT_OOM;
+	khugepaged_enter(vma, vma->vm_flags);
+
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
 			!mm_forbids_zeropage(vma->vm_mm) &&
 			transparent_hugepage_use_zero_page()) {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c0d3215008ba..7815218ab960 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -365,8 +365,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
 		 * register it here without waiting a page fault that
 		 * may not happen any time soon.
 		 */
-		if (khugepaged_enter_vma_merge(vma, *vm_flags))
-			return -ENOMEM;
+		khugepaged_enter_vma_merge(vma, *vm_flags);
 		break;
 	case MADV_NOHUGEPAGE:
 		*vm_flags &= ~VM_HUGEPAGE;
@@ -475,20 +474,20 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
 	return true;
 }
 
-int __khugepaged_enter(struct mm_struct *mm)
+void __khugepaged_enter(struct mm_struct *mm)
 {
 	struct mm_slot *mm_slot;
 	int wakeup;
 
 	mm_slot = alloc_mm_slot();
 	if (!mm_slot)
-		return -ENOMEM;
+		return;
 
 	/* __khugepaged_exit() must not run from under us */
 	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
 	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
 		free_mm_slot(mm_slot);
-		return 0;
+		return;
 	}
 
 	spin_lock(&khugepaged_mm_lock);
@@ -504,11 +503,9 @@ int __khugepaged_enter(struct mm_struct *mm)
 	mmgrab(mm);
 	if (wakeup)
 		wake_up_interruptible(&khugepaged_wait);
-
-	return 0;
 }
 
-int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
 			       unsigned long vm_flags)
 {
 	unsigned long hstart, hend;
@@ -519,13 +516,12 @@
 	 * file-private shmem THP is not supported.
 	 */
 	if (!hugepage_vma_check(vma, vm_flags))
-		return 0;
+		return;
 
 	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
 	hend = vma->vm_end & HPAGE_PMD_MASK;
 	if (hstart < hend)
-		return khugepaged_enter(vma, vm_flags);
-	return 0;
+		khugepaged_enter(vma, vm_flags);
 }
 
 void __khugepaged_exit(struct mm_struct *mm)
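For illustration only (not part of the patch): with the void signatures
above, call sites follow a fire-and-forget pattern.  The function below
is a hypothetical stand-in for a caller such as
do_huge_pmd_anonymous_page(), sketched under that assumption:

/*
 * Hypothetical caller sketch, not from this patch: registration with
 * khugepaged is best effort now, so there is no error to propagate.
 */
static vm_fault_t example_huge_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	khugepaged_enter(vma, vma->vm_flags);	/* void: no -ENOMEM to check */

	/* ... carry on allocating and mapping the huge page ... */
	return VM_FAULT_FALLBACK;
}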