From patchwork Mon Apr 14 22:05:46 2025
X-Patchwork-Submitter: Nico Pache
X-Patchwork-Id: 14051140
From: Nico Pache
To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org
Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org,
    mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com,
    baohua@kernel.org, baolin.wang@linux.alibaba.com, ryan.roberts@arm.com,
    willy@infradead.org, peterx@redhat.com, ziy@nvidia.com,
    wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com,
    vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com,
    yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com,
    aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com,
    anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de,
    will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org,
    jglisse@google.com, surenb@google.com, zokeefe@google.com,
    hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com
Subject: [PATCH v3 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse
Date: Mon, 14 Apr 2025 16:05:46 -0600
Message-ID: <20250414220557.35388-2-npache@redhat.com>
In-Reply-To: <20250414220557.35388-1-npache@redhat.com>
References: <20250414220557.35388-1-npache@redhat.com>

The khugepaged daemon and madvise_collapse have two different
implementations that do almost the same thing.

Create khugepaged_collapse_single_pmd to increase code reuse and create
an entry point for future khugepaged changes.

Refactor madvise_collapse and khugepaged_scan_mm_slot to use the new
khugepaged_collapse_single_pmd function.

Signed-off-by: Nico Pache
---
 mm/khugepaged.c | 92 ++++++++++++++++++++++++++-----------------------
 1 file changed, 46 insertions(+), 46 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cc945c6ab3bd..c23d80ed9c9c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2363,6 +2363,48 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 }
 #endif
 
+/*
+ * Try to collapse a single PMD starting at a PMD aligned addr, and return
+ * the results.
+ */
+static int khugepaged_collapse_single_pmd(unsigned long addr,
+			struct vm_area_struct *vma, bool *mmap_locked,
+			struct collapse_control *cc)
+{
+	int result = SCAN_FAIL;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long tva_flags = cc->is_khugepaged ?
+			TVA_ENFORCE_SYSFS : 0;
+
+	if (thp_vma_allowable_order(vma, vma->vm_flags,
+				tva_flags, PMD_ORDER)) {
+		if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
+			struct file *file = get_file(vma->vm_file);
+			pgoff_t pgoff = linear_page_index(vma, addr);
+
+			mmap_read_unlock(mm);
+			*mmap_locked = false;
+			result = hpage_collapse_scan_file(mm, addr, file, pgoff,
+							  cc);
+			fput(file);
+			if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
+				mmap_read_lock(mm);
+				if (hpage_collapse_test_exit_or_disable(mm))
+					goto end;
+				result = collapse_pte_mapped_thp(mm, addr,
+							!cc->is_khugepaged);
+				mmap_read_unlock(mm);
+			}
+		} else {
+			result = hpage_collapse_scan_pmd(mm, vma, addr,
+							 mmap_locked, cc);
+		}
+		if (cc->is_khugepaged && result == SCAN_SUCCEED)
+			++khugepaged_pages_collapsed;
+	}
+end:
+	return result;
+}
+
 static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
@@ -2437,33 +2479,9 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			VM_BUG_ON(khugepaged_scan.address < hstart ||
 				  khugepaged_scan.address + HPAGE_PMD_SIZE > hend);
-			if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
-				struct file *file = get_file(vma->vm_file);
-				pgoff_t pgoff = linear_page_index(vma,
-						khugepaged_scan.address);
-
-				mmap_read_unlock(mm);
-				mmap_locked = false;
-				*result = hpage_collapse_scan_file(mm,
-					khugepaged_scan.address, file, pgoff, cc);
-				fput(file);
-				if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
-					mmap_read_lock(mm);
-					if (hpage_collapse_test_exit_or_disable(mm))
-						goto breakouterloop;
-					*result = collapse_pte_mapped_thp(mm,
-						khugepaged_scan.address, false);
-					if (*result == SCAN_PMD_MAPPED)
-						*result = SCAN_SUCCEED;
-					mmap_read_unlock(mm);
-				}
-			} else {
-				*result = hpage_collapse_scan_pmd(mm, vma,
-					khugepaged_scan.address, &mmap_locked, cc);
-			}
-
-			if (*result == SCAN_SUCCEED)
-				++khugepaged_pages_collapsed;
+			*result = khugepaged_collapse_single_pmd(khugepaged_scan.address,
+						vma, &mmap_locked, cc);
 
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
@@ -2783,36 +2801,18 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 		mmap_assert_locked(mm);
 		memset(cc->node_load, 0, sizeof(cc->node_load));
 		nodes_clear(cc->alloc_nmask);
-		if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
-			struct file *file = get_file(vma->vm_file);
-			pgoff_t pgoff = linear_page_index(vma, addr);
-
-			mmap_read_unlock(mm);
-			mmap_locked = false;
-			result = hpage_collapse_scan_file(mm, addr, file, pgoff,
-							  cc);
-			fput(file);
-		} else {
-			result = hpage_collapse_scan_pmd(mm, vma, addr,
-							 &mmap_locked, cc);
-		}
+		result = khugepaged_collapse_single_pmd(addr, vma, &mmap_locked, cc);
+
 		if (!mmap_locked)
 			*prev = NULL;	/* Tell caller we dropped mmap_lock */
 
-handle_result:
 		switch (result) {
 		case SCAN_SUCCEED:
 		case SCAN_PMD_MAPPED:
 			++thps;
 			break;
 		case SCAN_PTE_MAPPED_HUGEPAGE:
-			BUG_ON(mmap_locked);
-			BUG_ON(*prev);
-			mmap_read_lock(mm);
-			result = collapse_pte_mapped_thp(mm, addr, true);
-			mmap_read_unlock(mm);
-			goto handle_result;
-		/* Whitelisted set of results where continuing OK */
 		case SCAN_PMD_NULL:
 		case SCAN_PTE_NON_PRESENT:
 		case SCAN_PTE_UFFD_WP:
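For readers skimming the diff above, the behavioral difference between the two
former copies boils down to a couple of is_khugepaged-dependent decisions that
the shared helper now encodes in one place. The following is a minimal,
standalone userspace sketch of that dispatch shape; scan_file(), scan_pmd() and
collapse_pte_mapped() are placeholders for illustration, not the kernel APIs:

	#include <stdbool.h>
	#include <stdio.h>

	enum scan_result { SCAN_FAIL, SCAN_SUCCEED, SCAN_PTE_MAPPED_HUGEPAGE };

	static unsigned long pages_collapsed;	/* models khugepaged_pages_collapsed */

	/* Placeholder scanners standing in for the file-backed/anon scan paths. */
	static enum scan_result scan_file(void) { return SCAN_PTE_MAPPED_HUGEPAGE; }
	static enum scan_result scan_pmd(void)  { return SCAN_SUCCEED; }
	static enum scan_result collapse_pte_mapped(bool install_pmd)
	{
		(void)install_pmd;	/* placeholder: real work happens in the kernel */
		return SCAN_SUCCEED;
	}

	/* One entry point for both callers, shaped like khugepaged_collapse_single_pmd(). */
	static enum scan_result collapse_single_pmd(bool file_backed, bool is_khugepaged)
	{
		enum scan_result result;

		if (file_backed) {
			result = scan_file();
			if (result == SCAN_PTE_MAPPED_HUGEPAGE)
				/* madvise_collapse asks for the PMD to be installed. */
				result = collapse_pte_mapped(!is_khugepaged);
		} else {
			result = scan_pmd();
		}
		if (is_khugepaged && result == SCAN_SUCCEED)
			pages_collapsed++;	/* only the daemon bumps the global stat */
		return result;
	}

	int main(void)
	{
		printf("daemon:  %d\n", collapse_single_pmd(true, true));
		printf("madvise: %d\n", collapse_single_pmd(true, false));
		printf("pages_collapsed=%lu\n", pages_collapsed);
		return 0;
	}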
From patchwork Mon Apr 14 22:05:47 2025
X-Patchwork-Id: 14051141
From: Nico Pache
Subject: [PATCH v3 02/12] khugepaged: rename hpage_collapse_* to khugepaged_*
Date: Mon, 14 Apr 2025 16:05:47 -0600
Message-ID: <20250414220557.35388-3-npache@redhat.com>

The functions in khugepaged.c use a mix of hpage_collapse and khugepaged
as the function prefix. Rename all of them to khugepaged to keep things
consistent and to slightly shorten the function names.

Signed-off-by: Nico Pache
---
 mm/khugepaged.c | 50 ++++++++++++++++++++++++-------------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c23d80ed9c9c..9a1242368cba 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -402,14 +402,14 @@ void __init khugepaged_destroy(void)
 	kmem_cache_destroy(mm_slot_cache);
 }
 
-static inline int hpage_collapse_test_exit(struct mm_struct *mm)
+static inline int khugepaged_test_exit(struct mm_struct *mm)
 {
 	return atomic_read(&mm->mm_users) == 0;
 }
 
-static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
+static inline int khugepaged_test_exit_or_disable(struct mm_struct *mm)
 {
-	return hpage_collapse_test_exit(mm) ||
+	return khugepaged_test_exit(mm) ||
 	       test_bit(MMF_DISABLE_THP, &mm->flags);
 }
 
@@ -444,7 +444,7 @@ void __khugepaged_enter(struct mm_struct *mm)
 	int wakeup;
 
 	/* __khugepaged_exit() must not run from under us */
-	VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm);
+	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
 	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags)))
 		return;
 
@@ -503,7 +503,7 @@ void __khugepaged_exit(struct mm_struct *mm)
 	} else if (mm_slot) {
 		/*
 		 * This is required to serialize against
-		 * hpage_collapse_test_exit() (which is guaranteed to run
+		 * khugepaged_test_exit() (which is guaranteed to run
 		 * under mmap sem read mode). Stop here (after we return all
 		 * pagetables will be destroyed) until khugepaged has finished
 		 * working on the pagetables under the mmap_lock.
@@ -851,7 +851,7 @@ struct collapse_control khugepaged_collapse_control = {
 	.is_khugepaged = true,
 };
 
-static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
+static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
 {
 	int i;
 
@@ -886,7 +886,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 }
 
 #ifdef CONFIG_NUMA
-static int hpage_collapse_find_target_node(struct collapse_control *cc)
+static int khugepaged_find_target_node(struct collapse_control *cc)
 {
 	int nid, target_node = 0, max_value = 0;
 
@@ -905,7 +905,7 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
 	return target_node;
 }
 #else
-static int hpage_collapse_find_target_node(struct collapse_control *cc)
+static int khugepaged_find_target_node(struct collapse_control *cc)
 {
 	return 0;
 }
@@ -925,7 +925,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 	struct vm_area_struct *vma;
 	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
 
-	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+	if (unlikely(khugepaged_test_exit_or_disable(mm)))
 		return SCAN_ANY_PROCESS;
 
 	*vmap = vma = find_vma(mm, address);
@@ -992,7 +992,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,
 
 /*
  * Bring missing pages in from swap, to complete THP collapse.
- * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
+ * Only done if khugepaged_scan_pmd believes it is worthwhile.
 *
 * Called and returns without pte mapped or spinlocks held.
 * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
@@ -1078,7 +1078,7 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
 {
 	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
 		     GFP_TRANSHUGE);
-	int node = hpage_collapse_find_target_node(cc);
+	int node = khugepaged_find_target_node(cc);
 	struct folio *folio;
 
 	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
@@ -1264,7 +1264,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	return result;
 }
 
-static int hpage_collapse_scan_pmd(struct mm_struct *mm,
+static int khugepaged_scan_pmd(struct mm_struct *mm,
 				   struct vm_area_struct *vma,
 				   unsigned long address, bool *mmap_locked,
 				   struct collapse_control *cc)
@@ -1378,7 +1378,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 		 * hit record.
 		 */
 		node = folio_nid(folio);
-		if (hpage_collapse_scan_abort(node, cc)) {
+		if (khugepaged_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
@@ -1447,7 +1447,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
 
 	lockdep_assert_held(&khugepaged_mm_lock);
 
-	if (hpage_collapse_test_exit(mm)) {
+	if (khugepaged_test_exit(mm)) {
 		/* free mm_slot */
 		hash_del(&slot->hash);
 		list_del(&slot->mm_node);
@@ -1742,7 +1742,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
 			continue;
 
-		if (hpage_collapse_test_exit(mm))
+		if (khugepaged_test_exit(mm))
 			continue;
 		/*
 		 * When a vma is registered with uffd-wp, we cannot recycle
@@ -2264,7 +2264,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }
 
-static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
+static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
 				    struct file *file, pgoff_t start,
 				    struct collapse_control *cc)
 {
@@ -2309,7 +2309,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 		}
 
 		node = folio_nid(folio);
-		if (hpage_collapse_scan_abort(node, cc)) {
+		if (khugepaged_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			break;
 		}
@@ -2355,7 +2355,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }
 #else
-static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
+static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
 				    struct file *file, pgoff_t start,
 				    struct collapse_control *cc)
 {
@@ -2383,19 +2383,19 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
 
 			mmap_read_unlock(mm);
 			*mmap_locked = false;
-			result = hpage_collapse_scan_file(mm, addr, file, pgoff,
+			result = khugepaged_scan_file(mm, addr, file, pgoff,
 							cc);
 			fput(file);
 			if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
 				mmap_read_lock(mm);
-				if (hpage_collapse_test_exit_or_disable(mm))
+				if (khugepaged_test_exit_or_disable(mm))
 					goto end;
 				result = collapse_pte_mapped_thp(mm, addr,
 							!cc->is_khugepaged);
 				mmap_read_unlock(mm);
 			}
 		} else {
-			result = hpage_collapse_scan_pmd(mm, vma, addr,
+			result = khugepaged_scan_pmd(mm, vma, addr,
 							mmap_locked, cc);
 		}
 		if (cc->is_khugepaged && result == SCAN_SUCCEED)
@@ -2443,7 +2443,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		goto breakouterloop_mmap_lock;
 
 	progress++;
-	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+	if (unlikely(khugepaged_test_exit_or_disable(mm)))
 		goto breakouterloop;
 
 	vma_iter_init(&vmi, mm, khugepaged_scan.address);
@@ -2451,7 +2451,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		unsigned long hstart, hend;
 
 		cond_resched();
-		if (unlikely(hpage_collapse_test_exit_or_disable(mm))) {
+		if (unlikely(khugepaged_test_exit_or_disable(mm))) {
 			progress++;
 			break;
 		}
@@ -2473,7 +2473,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			bool mmap_locked = true;
 
 			cond_resched();
-			if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+			if (unlikely(khugepaged_test_exit_or_disable(mm)))
 				goto breakouterloop;
 
 			VM_BUG_ON(khugepaged_scan.address < hstart ||
@@ -2509,7 +2509,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 	 * Release the current mm_slot if this mm is about to die, or
 	 * if we scanned all vmas of this mm.
 	 */
-	if (hpage_collapse_test_exit(mm) || !vma) {
+	if (khugepaged_test_exit(mm) || !vma) {
 		/*
 		 * Make sure that if mm_users is reaching zero while
 		 * khugepaged runs here, khugepaged_exit will find
From patchwork Mon Apr 14 22:05:48 2025
X-Patchwork-Id: 14051142
From: Nico Pache
Subject: [PATCH v3 03/12] khugepaged: generalize hugepage_vma_revalidate for mTHP support
Date: Mon, 14 Apr 2025 16:05:48 -0600
Message-ID: <20250414220557.35388-4-npache@redhat.com>

For khugepaged to support different mTHP orders, we must generalize
hugepage_vma_revalidate() to work for arbitrary orders. No functional
change in this patch.

Co-developed-by: Dev Jain
Signed-off-by: Dev Jain
Signed-off-by: Nico Pache
---
 mm/khugepaged.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 9a1242368cba..61c6ef8fa604 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -920,7 +920,7 @@ static int khugepaged_find_target_node(struct collapse_control *cc)
 
 static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 				   bool expect_anon,
 				   struct vm_area_struct **vmap,
-				   struct collapse_control *cc)
+				   struct collapse_control *cc, int order)
 {
 	struct vm_area_struct *vma;
 	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
@@ -932,9 +932,9 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 	if (!vma)
 		return SCAN_VMA_NULL;
 
-	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
+	if (!thp_vma_suitable_order(vma, address, order))
 		return SCAN_ADDRESS_RANGE;
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER))
+	if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, order))
 		return SCAN_VMA_CHECK;
 	/*
 	 * Anon VMA expected, the address may be unmapped then
@@ -1130,7 +1130,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 		goto out_nolock;
 
 	mmap_read_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
+	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
 	if (result != SCAN_SUCCEED) {
 		mmap_read_unlock(mm);
 		goto out_nolock;
@@ -1164,7 +1164,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * mmap_lock.
 	 */
 	mmap_write_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
+	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
 	if (result != SCAN_SUCCEED)
 		goto out_up_write;
 	/* check if the pmd is still valid */
@@ -2790,7 +2790,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 		mmap_read_lock(mm);
 		mmap_locked = true;
 		result = hugepage_vma_revalidate(mm, addr, false, &vma,
-						 cc);
+						 cc, HPAGE_PMD_ORDER);
 		if (result != SCAN_SUCCEED) {
 			last_fail = result;
 			goto out_nolock;
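Once the order is a parameter, the "suitable" check the revalidation relies on
is essentially: does the naturally aligned window of size (PAGE_SIZE << order)
containing the address still fit inside the VMA? The snippet below is a
userspace model of that question only, with made-up VMA bounds; it is not the
kernel's thp_vma_suitable_order():

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SHIFT 12UL	/* assume 4 KiB base pages */

	static bool order_fits_vma(unsigned long vm_start, unsigned long vm_end,
				   unsigned long addr, unsigned int order)
	{
		unsigned long size = 1UL << (PAGE_SHIFT + order);
		unsigned long start = addr & ~(size - 1);	/* round down to alignment */

		return start >= vm_start && start + size <= vm_end;
	}

	int main(void)
	{
		unsigned long vm_start = 0x200000, vm_end = 0x3f0000;	/* ~1.9 MiB VMA */

		/* A PMD-order (9) window needs a full aligned 2 MiB range: fails here. */
		printf("order 9: %d\n", order_fits_vma(vm_start, vm_end, 0x280000, 9));
		/* A 64 KiB (order 4) window can still succeed in the same VMA. */
		printf("order 4: %d\n", order_fits_vma(vm_start, vm_end, 0x3e0000, 4));
		return 0;
	}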
From patchwork Mon Apr 14 22:05:49 2025
X-Patchwork-Id: 14051143
From: Nico Pache
Subject: [PATCH v3 04/12] khugepaged: generalize alloc_charge_folio()
Date: Mon, 14 Apr 2025 16:05:49 -0600
Message-ID: <20250414220557.35388-5-npache@redhat.com>

From: Dev Jain

Pass order to alloc_charge_folio() and update mTHP statistics.

Co-developed-by: Nico Pache
Signed-off-by: Nico Pache
Signed-off-by: Dev Jain
---
 include/linux/huge_mm.h |  2 ++
 mm/huge_memory.c        |  4 ++++
 mm/khugepaged.c         | 17 +++++++++++------
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e893d546a49f..2acd9c1fc558 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -123,6 +123,8 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_ALLOC,
 	MTHP_STAT_ANON_FAULT_FALLBACK,
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
+	MTHP_STAT_COLLAPSE_ALLOC,
+	MTHP_STAT_COLLAPSE_ALLOC_FAILED,
 	MTHP_STAT_ZSWPOUT,
 	MTHP_STAT_SWPIN,
 	MTHP_STAT_SWPIN_FALLBACK,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2a47682d1ab7..6db73d2d0962 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -615,6 +615,8 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
 DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(collapse_alloc, MTHP_STAT_COLLAPSE_ALLOC);
+DEFINE_MTHP_STAT_ATTR(collapse_alloc_failed, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
 DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
 DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
@@ -680,6 +682,8 @@ static struct attribute *any_stats_attrs[] = {
 #endif
 	&split_attr.attr,
 	&split_failed_attr.attr,
+	&collapse_alloc_attr.attr,
+	&collapse_alloc_failed_attr.attr,
 	NULL,
 };
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 61c6ef8fa604..78caa6d227c3 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1074,21 +1074,26 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 }
 
 static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
-			      struct collapse_control *cc)
+			      struct collapse_control *cc, u8 order)
 {
 	gfp_t gfp = (cc->is_khugepaged ?
 		     alloc_hugepage_khugepaged_gfpmask() : GFP_TRANSHUGE);
 	int node = khugepaged_find_target_node(cc);
 	struct folio *folio;
 
-	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
+	folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
 	if (!folio) {
 		*foliop = NULL;
-		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+		if (order == HPAGE_PMD_ORDER)
+			count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+		count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
 	}
 
-	count_vm_event(THP_COLLAPSE_ALLOC);
+	if (order == HPAGE_PMD_ORDER)
+		count_vm_event(THP_COLLAPSE_ALLOC);
+	count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC);
+
 	if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
 		folio_put(folio);
 		*foliop = NULL;
@@ -1125,7 +1130,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 */
 	mmap_read_unlock(mm);
 
-	result = alloc_charge_folio(&folio, mm, cc);
+	result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
 	if (result != SCAN_SUCCEED)
 		goto out_nolock;
 
@@ -1849,7 +1854,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
-	result = alloc_charge_folio(&new_folio, mm, cc);
+	result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
 	if (result != SCAN_SUCCEED)
 		goto out;
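The accounting split above keeps the legacy PMD-only vm events meaningful while
adding a per-order mTHP counter. A minimal userspace sketch of that split (the
counter names and orders below are illustrative, not the kernel's):

	#include <stdio.h>

	#define PMD_ORDER	9
	#define NR_ORDERS	(PMD_ORDER + 1)

	static unsigned long thp_collapse_alloc;		/* legacy, PMD order only */
	static unsigned long mthp_collapse_alloc[NR_ORDERS];	/* new, one slot per order */

	/* Mirrors the accounting introduced in alloc_charge_folio(). */
	static void account_collapse_alloc(unsigned int order)
	{
		if (order == PMD_ORDER)
			thp_collapse_alloc++;	/* keep the old global stat meaningful */
		mthp_collapse_alloc[order]++;	/* every order gets its own counter */
	}

	int main(void)
	{
		account_collapse_alloc(9);	/* PMD-sized collapse */
		account_collapse_alloc(4);	/* 64 KiB mTHP collapse (4 KiB pages) */

		printf("thp_collapse_alloc=%lu mthp[9]=%lu mthp[4]=%lu\n",
		       thp_collapse_alloc, mthp_collapse_alloc[9], mthp_collapse_alloc[4]);
		return 0;
	}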
From patchwork Mon Apr 14 22:05:50 2025
X-Patchwork-Id: 14051144
From: Nico Pache
Subject: [PATCH v3 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support
Date: Mon, 14 Apr 2025 16:05:50 -0600
Message-ID: <20250414220557.35388-6-npache@redhat.com>

Generalize the __collapse_huge_page_* functions to take an order so they
can support future mTHP collapse.

mTHP collapse can suffer from inconsistent behavior and memory-waste
"creep", so disable swap-in and shared-page support for mTHP collapse.

No functional changes in this patch.

Co-developed-by: Dev Jain
Signed-off-by: Dev Jain
Signed-off-by: Nico Pache
---
 mm/khugepaged.c | 46 ++++++++++++++++++++++++++++------------------
 1 file changed, 28 insertions(+), 18 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 78caa6d227c3..dfecedc6a515 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -565,15 +565,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 					unsigned long address,
 					pte_t *pte,
 					struct collapse_control *cc,
-					struct list_head *compound_pagelist)
+					struct list_head *compound_pagelist,
+					u8 order)
 {
 	struct page *page = NULL;
 	struct folio *folio = NULL;
 	pte_t *_pte;
 	int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
 	bool writable = false;
+	int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
 
-	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
+	for (_pte = pte; _pte < pte + (1 << order);
 	     _pte++, address += PAGE_SIZE) {
 		pte_t pteval = ptep_get(_pte);
 		if (pte_none(pteval) || (pte_present(pteval) &&
@@ -581,7 +583,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			++none_or_zero;
 			if (!userfaultfd_armed(vma) &&
 			    (!cc->is_khugepaged ||
-			     none_or_zero <= khugepaged_max_ptes_none)) {
+			     none_or_zero <= scaled_none)) {
 				continue;
 			} else {
 				result = SCAN_EXCEED_NONE_PTE;
@@ -609,8 +611,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		/* See hpage_collapse_scan_pmd(). */
 		if (folio_maybe_mapped_shared(folio)) {
 			++shared;
-			if (cc->is_khugepaged &&
-			    shared > khugepaged_max_ptes_shared) {
+			if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
+			    shared > khugepaged_max_ptes_shared)) {
 				result = SCAN_EXCEED_SHARED_PTE;
 				count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 				goto out;
@@ -711,13 +713,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
 						struct vm_area_struct *vma,
 						unsigned long address,
 						spinlock_t *ptl,
-						struct list_head *compound_pagelist)
+						struct list_head *compound_pagelist,
+						u8 order)
 {
 	struct folio *src, *tmp;
 	pte_t *_pte;
 	pte_t pteval;
 
-	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
+	for (_pte = pte; _pte < pte + (1 << order);
 	     _pte++, address += PAGE_SIZE) {
 		pteval = ptep_get(_pte);
 		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
@@ -764,7 +767,8 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
 					     pmd_t *pmd,
 					     pmd_t orig_pmd,
 					     struct vm_area_struct *vma,
-					     struct list_head *compound_pagelist)
+					     struct list_head *compound_pagelist,
+					     u8 order)
 {
 	spinlock_t *pmd_ptl;
 
@@ -781,7 +785,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
 	 * Release both raw and compound pages isolated
 	 * in __collapse_huge_page_isolate.
 	 */
-	release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
+	release_pte_pages(pte, pte + (1 << order), compound_pagelist);
 }
 
 /*
@@ -802,7 +806,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
 static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
 		pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
 		unsigned long address, spinlock_t *ptl,
-		struct list_head *compound_pagelist)
+		struct list_head *compound_pagelist, u8 order)
 {
 	unsigned int i;
 	int result = SCAN_SUCCEED;
@@ -810,7 +814,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
 	/*
 	 * Copying pages' contents is subject to memory poison at any iteration.
 	 */
-	for (i = 0; i < HPAGE_PMD_NR; i++) {
+	for (i = 0; i < (1 << order); i++) {
 		pte_t pteval = ptep_get(pte + i);
 		struct page *page = folio_page(folio, i);
 		unsigned long src_addr = address + i * PAGE_SIZE;
@@ -829,10 +833,10 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
 
 	if (likely(result == SCAN_SUCCEED))
 		__collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
-						    compound_pagelist);
+						    compound_pagelist, order);
 	else
 		__collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
-						 compound_pagelist);
+						 compound_pagelist, order);
 
 	return result;
 }
@@ -1000,11 +1004,11 @@ static int check_pmd_still_valid(struct mm_struct *mm,
 
 static int __collapse_huge_page_swapin(struct mm_struct *mm,
 				       struct vm_area_struct *vma,
 				       unsigned long haddr, pmd_t *pmd,
-				       int referenced)
+				       int referenced, u8 order)
 {
 	int swapped_in = 0;
 	vm_fault_t ret = 0;
-	unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
+	unsigned long address, end = haddr + (PAGE_SIZE << order);
 	int result;
 	pte_t *pte = NULL;
 	spinlock_t *ptl;
@@ -1035,6 +1039,12 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 		if (!is_swap_pte(vmf.orig_pte))
 			continue;
 
+		/* Don't swap in for mTHP collapse */
+		if (order != HPAGE_PMD_ORDER) {
+			result = SCAN_EXCEED_SWAP_PTE;
+			goto out;
+		}
+
 		vmf.pte = pte;
 		vmf.ptl = ptl;
 		ret = do_swap_page(&vmf);
@@ -1154,7 +1164,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 		 * that case. Continuing to collapse causes inconsistency.
 		 */
 		result = __collapse_huge_page_swapin(mm, vma, address, pmd,
-						     referenced);
+						     referenced, HPAGE_PMD_ORDER);
 		if (result != SCAN_SUCCEED)
 			goto out_nolock;
 	}
@@ -1201,7 +1211,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
 	if (pte) {
 		result = __collapse_huge_page_isolate(vma, address, pte, cc,
-						      &compound_pagelist);
+						      &compound_pagelist, HPAGE_PMD_ORDER);
 		spin_unlock(pte_ptl);
 	} else {
 		result = SCAN_PMD_NULL;
@@ -1231,7 +1241,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 
 	result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
 					   vma, address, pte_ptl,
-					   &compound_pagelist);
+					   &compound_pagelist, HPAGE_PMD_ORDER);
 	pte_unmap(pte);
 	if (unlikely(result != SCAN_SUCCEED))
 		goto out_up_write;
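The scaled_none threshold introduced above is just the PMD-level tunable shifted
down to the attempted order. A standalone sketch of that arithmetic, assuming
the default max_ptes_none of 511 and 4 KiB base pages (both assumptions, not
taken from this patch):

	#include <stdio.h>

	#define HPAGE_PMD_ORDER 9	/* 512 PTEs per PMD with 4 KiB pages */

	int main(void)
	{
		unsigned int max_ptes_none = 511;	/* assumed default tunable */

		/*
		 * Per-order threshold used by __collapse_huge_page_isolate():
		 *   scaled_none = max_ptes_none >> (HPAGE_PMD_ORDER - order)
		 */
		for (unsigned int order = 2; order <= HPAGE_PMD_ORDER; order++) {
			unsigned int ptes = 1u << order;
			unsigned int scaled = max_ptes_none >> (HPAGE_PMD_ORDER - order);

			printf("order %u (%3u PTEs): up to %3u may be none/zero\n",
			       order, ptes, scaled);
		}
		return 0;
	}

For order 9 this prints 511 (the unscaled tunable), for order 4 it prints 15,
and for order 2 it prints 3, i.e. smaller collapse targets must be
proportionally fuller before they are eligible.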
From patchwork Mon Apr 14 22:05:51 2025
X-Patchwork-Id: 14051145
From: Nico Pache
Subject: [PATCH v3 06/12] khugepaged: introduce khugepaged_scan_bitmap for mTHP support
Date: Mon, 14 Apr 2025 16:05:51 -0600
Message-ID: <20250414220557.35388-7-npache@redhat.com>

khugepaged scans PMD ranges for potential collapse to a hugepage. To add
mTHP support, use this scan to instead record chunks of fully utilized
sections of the PMD.

Create a bitmap to represent a PMD in chunks of order MIN_MTHP_ORDER. By
default this is set to order 3: for a 4K page size and a 512-entry PMD
this results in a 64-bit bitmap, which allows some optimizations. For
other arches, like ARM64 with 64K pages, a larger order can be set if
needed.

khugepaged_scan_bitmap uses a stack struct to recursively scan the bitmap
that represents chunks of utilized regions. We can then determine which
mTHP size fits best; in the following patch, this bitmap is set while
scanning the PMD.

max_ptes_none is used as a scale to determine how "full" an order must be
before it is considered for collapse. If an order is set to "always",
collapse to that order greedily.

Signed-off-by: Nico Pache
---
 include/linux/khugepaged.h |  4 ++
 mm/khugepaged.c            | 94 ++++++++++++++++++++++++++++++++++----
 2 files changed, 89 insertions(+), 9 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 1f46046080f5..60d41215bc1a 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -1,6 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef _LINUX_KHUGEPAGED_H
 #define _LINUX_KHUGEPAGED_H
+#define KHUGEPAGED_MIN_MTHP_ORDER 3
+#define KHUGEPAGED_MIN_MTHP_NR (1<<KHUGEPAGED_MIN_MTHP_ORDER)
[...]
+	cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+		{ HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0 };
+
+	while (top >= 0) {
+		state = cc->mthp_bitmap_stack[top--];
+		order = state.order + KHUGEPAGED_MIN_MTHP_ORDER;
+		offset = state.offset;
+		num_chunks = 1 << (state.order);
+		// Skip mTHP orders that are not enabled
+		if (!test_bit(order, &enabled_orders))
+			goto next;
+
+		// copy the relevant section to a new bitmap
+		bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap, offset,
+				   MTHP_BITMAP_SIZE);
+
+		bits_set = bitmap_weight(cc->mthp_bitmap_temp, num_chunks);
+		threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
+				>> (HPAGE_PMD_ORDER - state.order);
+
+		// Check if the region is "almost full" based on the threshold
+		if (bits_set > threshold_bits || is_pmd_only
+			|| test_bit(order, &huge_anon_orders_always)) {
+			ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
+					mmap_locked, order, offset * KHUGEPAGED_MIN_MTHP_NR);
+			if (ret == SCAN_SUCCEED) {
+				collapsed += (1 << order);
+				continue;
+			}
+		}
+
+next:
+		if (state.order > 0) {
+			next_order = state.order - 1;
+			mid_offset = offset + (num_chunks / 2);
+			cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+				{ next_order, mid_offset };
+			cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+				{ next_order, offset };
+		}
+	}
+	return collapsed;
+}
+
 static int khugepaged_scan_pmd(struct mm_struct *mm,
 			       struct vm_area_struct *vma,
 			       unsigned long address, bool *mmap_locked,
@@ -1445,9 +1523,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	pte_unmap_unlock(pte, ptl);
 	if (result == SCAN_SUCCEED) {
 		result = collapse_huge_page(mm, address, referenced,
-					    unmapped, cc);
-		/* collapse_huge_page will return with the mmap_lock released */
-		*mmap_locked = false;
+					    unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
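The subdivision this hunk implements can be hard to picture from the diff
alone. Below is a self-contained userspace model of the same stack-based
binary recursion over a 64-bit occupancy bitmap. The occupancy data, the
max_ptes_none value, and the omission of the "always"/PMD-only special cases
are all assumptions made for illustration; this mirrors the structure of
khugepaged_scan_bitmap() but is not the kernel code:

	#include <stdio.h>

	#define HPAGE_PMD_ORDER	9	/* 512 PTEs per PMD (4 KiB pages) */
	#define MIN_MTHP_ORDER	3	/* one bit covers 8 PTEs -> 64-bit bitmap */
	#define MAX_STACK	(2 * (HPAGE_PMD_ORDER - MIN_MTHP_ORDER) + 1)

	struct state { int order; int offset; };	/* order counted in bitmap bits */

	/* Pop-count of @nbits bits of @map starting at bit @off. */
	static int weight(unsigned long long map, int off, int nbits)
	{
		unsigned long long mask = (nbits >= 64) ? ~0ULL : ((1ULL << nbits) - 1);
		return __builtin_popcountll((map >> off) & mask);
	}

	int main(void)
	{
		/* Toy occupancy: chunks 0-31 populated, 32-63 empty. */
		unsigned long long bitmap = 0x00000000ffffffffULL;
		unsigned int max_ptes_none = 255;	/* assumed stricter than the 511 default */
		struct state stack[MAX_STACK];
		int top = -1;

		stack[++top] = (struct state){ HPAGE_PMD_ORDER - MIN_MTHP_ORDER, 0 };
		while (top >= 0) {
			struct state s = stack[top--];
			int chunks = 1 << s.order;
			int set = weight(bitmap, s.offset, chunks);
			/* Scale the PTE-based threshold down to this order, in chunks. */
			int threshold = (512 - max_ptes_none - 1) >> (HPAGE_PMD_ORDER - s.order);

			if (set > threshold) {
				printf("collapse order %d at chunk offset %d (%d/%d chunks set)\n",
				       s.order + MIN_MTHP_ORDER, s.offset, set, chunks);
				continue;	/* region taken; don't subdivide it further */
			}
			if (s.order > 0) {	/* otherwise split in half and retry */
				stack[++top] = (struct state){ s.order - 1, s.offset + chunks / 2 };
				stack[++top] = (struct state){ s.order - 1, s.offset };
			}
		}
		return 0;
	}

With this input the full PMD is not full enough, but its populated half is, so
the model reports a single order-8 (1 MiB) collapse at chunk offset 0, which is
the kind of outcome the greedy top-down subdivision is designed to find.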
22:05:52 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nico Pache X-Patchwork-Id: 14051146 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DFB1F1F4177 for ; Mon, 14 Apr 2025 22:07:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668461; cv=none; b=CEX42IzfzG83qGKIgpqtKhiUPC8uqpO/6qoA5dPrWHQJIeNw3z9PcFNGfh2IKzhv3m7SqrSyKxb6eouDxahDx62zaWWiqtOzGleqFQePwECb8p8fnKEQAmPxdJieafRYdxipUsNAWYy282qp3NFYA8HnsLVIuab7HP90Vmpdcpw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668461; c=relaxed/simple; bh=XFfYPDDF9iEA1DBoDwJ3CzsrCHn6Gdb1gRg1rX1DRRg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PuFM7/ghDJ1tDvomR59hAzh74nQqrlWC6173+S1jatrVg2A6F8GIpg0aDgZ0gLgdlSvM1ZQ+b4k7jKa51ty4eMxi8E18oTzI5vd9k5GpDFKfMRx8EHrTPNaruB/lAXjesfmcRiyvrhhWcT14BjxelYGK3Y5+hGj7vdDNXlsFTCM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=BJljogL4; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="BJljogL4" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1744668459; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=qtzDELGqTVE9YoyCw6Exie8oZagkEva6Kmv70R0L40s=; b=BJljogL49r46CHzpggcFahSsZNEigbv1BUVsEHFkRlxLEuA8MIWQrqTzScJDHcWNrr4nnE s58/SFah+Jku8YaGnxbbPwynAR6Uvj7TMSl8z6epiCMcRAW2/uCZ/wHpvmYQlrNCRTGo2f 297auwDbsfsD5m17Nu9OGLuqBxX6k78= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-629-wMBGh-2mPMizjjMD_zyndw-1; Mon, 14 Apr 2025 18:07:35 -0400 X-MC-Unique: wMBGh-2mPMizjjMD_zyndw-1 X-Mimecast-MFC-AGG-ID: wMBGh-2mPMizjjMD_zyndw_1744668451 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 5AD4E18001E2; Mon, 14 Apr 2025 22:07:31 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.91]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id C60051956094; Mon, 14 Apr 2025 22:07:23 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, 
linux-trace-kernel@vger.kernel.org Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, ryan.roberts@arm.com, willy@infradead.org, peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com Subject: [PATCH v3 07/12] khugepaged: add mTHP support Date: Mon, 14 Apr 2025 16:05:52 -0600 Message-ID: <20250414220557.35388-8-npache@redhat.com> In-Reply-To: <20250414220557.35388-1-npache@redhat.com> References: <20250414220557.35388-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 Introduce the ability for khugepaged to collapse to different mTHP sizes. While scanning PMD ranges for potential collapse candidates, keep track of pages in MIN_MTHP_ORDER chunks via a bitmap. Each bit represents a utilized region of order MIN_MTHP_ORDER ptes. If mTHPs are enabled, we remove the max_ptes_none restriction during the scan phase so we don't bail out early and miss potential mTHP candidates. After the scan is complete, we perform binary recursion on the bitmap to determine which mTHP size would be most efficient to collapse to. max_ptes_none is scaled by the attempted collapse order to determine how full a THP must be to be eligible. If an mTHP collapse is attempted but the range contains swapped-out or shared pages, we don't perform the collapse.
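To make the scaling concrete, here is a minimal user-space sketch of how the max_ptes_none cutoff shrinks with the attempted collapse order, following the same shift arithmetic as the khugepaged_scan_bitmap hunk earlier in the series. It is illustrative only, not the kernel code: threshold_chunks is a hypothetical helper name, and the constants assume 4K pages, a 512-entry PMD, and the MIN_MTHP_ORDER of 2 used by this series.

#include <stdio.h>

#define HPAGE_PMD_ORDER  9                      /* 512 PTEs per PMD with 4K pages */
#define HPAGE_PMD_NR     (1 << HPAGE_PMD_ORDER)
#define MIN_MTHP_ORDER   2                      /* each bitmap bit covers 4 PTEs */

/*
 * Number of populated MIN_MTHP_ORDER chunks a collapse attempt at 'order'
 * must exceed: the PMD-level max_ptes_none allowance shifted down to the
 * smaller order (mirrors the threshold_bits computation in the patch).
 */
static int threshold_chunks(int max_ptes_none, int order)
{
	return (HPAGE_PMD_NR - max_ptes_none - 1) >>
	       (HPAGE_PMD_ORDER - (order - MIN_MTHP_ORDER));
}

int main(void)
{
	/*
	 * max_ptes_none = 0: only fully populated regions qualify;
	 * max_ptes_none = 511 (the default): one populated chunk is enough.
	 */
	for (int order = MIN_MTHP_ORDER; order <= HPAGE_PMD_ORDER; order++)
		printf("order %d: need more than %d of %d chunks populated\n",
		       order, threshold_chunks(0, order),
		       1 << (order - MIN_MTHP_ORDER));
	return 0;
}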
Signed-off-by: Nico Pache --- include/linux/khugepaged.h | 2 +- mm/khugepaged.c | 122 ++++++++++++++++++++++++++----------- 2 files changed, 89 insertions(+), 35 deletions(-) diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h index 60d41215bc1a..18fe6eb5051d 100644 --- a/include/linux/khugepaged.h +++ b/include/linux/khugepaged.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _LINUX_KHUGEPAGED_H #define _LINUX_KHUGEPAGED_H -#define KHUGEPAGED_MIN_MTHP_ORDER 3 +#define KHUGEPAGED_MIN_MTHP_ORDER 2 #define KHUGEPAGED_MIN_MTHP_NR (1<anon_vma); - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address, - address + HPAGE_PMD_SIZE); + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address, + _address + (PAGE_SIZE << order)); mmu_notifier_invalidate_range_start(&range); pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */ + /* * This removes any huge TLB entry from the CPU so we won't allow * huge and small TLB entries for the same virtual address to @@ -1226,10 +1230,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, mmu_notifier_invalidate_range_end(&range); tlb_remove_table_sync_one(); - pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl); + pte = pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl); if (pte) { - result = __collapse_huge_page_isolate(vma, address, pte, cc, - &compound_pagelist, HPAGE_PMD_ORDER); + result = __collapse_huge_page_isolate(vma, _address, pte, cc, + &compound_pagelist, order); spin_unlock(pte_ptl); } else { result = SCAN_PMD_NULL; @@ -1258,8 +1262,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, anon_vma_unlock_write(vma->anon_vma); result = __collapse_huge_page_copy(pte, folio, pmd, _pmd, - vma, address, pte_ptl, - &compound_pagelist, HPAGE_PMD_ORDER); + vma, _address, pte_ptl, + &compound_pagelist, order); pte_unmap(pte); if (unlikely(result != SCAN_SUCCEED)) goto out_up_write; @@ -1270,20 +1274,35 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, * write. 
*/ __folio_mark_uptodate(folio); - pgtable = pmd_pgtable(_pmd); - - _pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot); - _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma); - - spin_lock(pmd_ptl); - BUG_ON(!pmd_none(*pmd)); - folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE); - folio_add_lru_vma(folio, vma); - pgtable_trans_huge_deposit(mm, pmd, pgtable); - set_pmd_at(mm, address, pmd, _pmd); - update_mmu_cache_pmd(vma, address, pmd); - deferred_split_folio(folio, false); - spin_unlock(pmd_ptl); + if (order == HPAGE_PMD_ORDER) { + pgtable = pmd_pgtable(_pmd); + _pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot); + _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma); + + spin_lock(pmd_ptl); + BUG_ON(!pmd_none(*pmd)); + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE); + folio_add_lru_vma(folio, vma); + pgtable_trans_huge_deposit(mm, pmd, pgtable); + set_pmd_at(mm, address, pmd, _pmd); + update_mmu_cache_pmd(vma, address, pmd); + deferred_split_folio(folio, false); + spin_unlock(pmd_ptl); + } else { //mTHP + mthp_pte = mk_pte(&folio->page, vma->vm_page_prot); + mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma); + + spin_lock(pmd_ptl); + folio_ref_add(folio, (1 << order) - 1); + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE); + folio_add_lru_vma(folio, vma); + set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order)); + update_mmu_cache_range(NULL, vma, _address, pte, (1 << order)); + + smp_wmb(); /* make pte visible before pmd */ + pmd_populate(mm, pmd, pmd_pgtable(_pmd)); + spin_unlock(pmd_ptl); + } folio = NULL; @@ -1364,31 +1383,58 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, { pmd_t *pmd; pte_t *pte, *_pte; + int i; int result = SCAN_FAIL, referenced = 0; int none_or_zero = 0, shared = 0; struct page *page = NULL; struct folio *folio = NULL; unsigned long _address; + unsigned long enabled_orders; spinlock_t *ptl; int node = NUMA_NO_NODE, unmapped = 0; + bool is_pmd_only; bool writable = false; - + int chunk_none_count = 0; + int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER); + unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0; VM_BUG_ON(address & ~HPAGE_PMD_MASK); result = find_pmd_or_thp_or_none(mm, address, &pmd); if (result != SCAN_SUCCEED) goto out; + bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE); + bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE); memset(cc->node_load, 0, sizeof(cc->node_load)); nodes_clear(cc->alloc_nmask); + + enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags, + tva_flags, THP_ORDERS_ALL_ANON); + + is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER)); + pte = pte_offset_map_lock(mm, pmd, address, &ptl); if (!pte) { result = SCAN_PMD_NULL; goto out; } - for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR; - _pte++, _address += PAGE_SIZE) { + for (i = 0; i < HPAGE_PMD_NR; i++) { + /* + * we are reading in KHUGEPAGED_MIN_MTHP_NR page chunks. if + * there are pages in this chunk keep track of it in the bitmap + * for mTHP collapsing. 
+ */ + if (i % KHUGEPAGED_MIN_MTHP_NR == 0) { + if (chunk_none_count <= scaled_none) + bitmap_set(cc->mthp_bitmap, + i / KHUGEPAGED_MIN_MTHP_NR, 1); + + chunk_none_count = 0; + } + + _pte = pte + i; + _address = address + i * PAGE_SIZE; pte_t pteval = ptep_get(_pte); if (is_swap_pte(pteval)) { ++unmapped; @@ -1411,10 +1457,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, } } if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) { + ++chunk_none_count; ++none_or_zero; if (!userfaultfd_armed(vma) && - (!cc->is_khugepaged || - none_or_zero <= khugepaged_max_ptes_none)) { + (!cc->is_khugepaged || !is_pmd_only || + none_or_zero <= khugepaged_max_ptes_none)) { continue; } else { result = SCAN_EXCEED_NONE_PTE; @@ -1510,6 +1557,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, address))) referenced++; } + if (!writable) { result = SCAN_PAGE_RO; } else if (cc->is_khugepaged && @@ -1522,8 +1570,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, out_unmap: pte_unmap_unlock(pte, ptl); if (result == SCAN_SUCCEED) { - result = collapse_huge_page(mm, address, referenced, - unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0); + result = khugepaged_scan_bitmap(mm, address, referenced, unmapped, cc, + mmap_locked, enabled_orders); + if (result > 0) + result = SCAN_SUCCEED; + else + result = SCAN_FAIL; } out: trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced, @@ -2479,11 +2531,13 @@ static int khugepaged_collapse_single_pmd(unsigned long addr, fput(file); if (result == SCAN_PTE_MAPPED_HUGEPAGE) { mmap_read_lock(mm); + *mmap_locked = true; if (khugepaged_test_exit_or_disable(mm)) goto end; result = collapse_pte_mapped_thp(mm, addr, !cc->is_khugepaged); mmap_read_unlock(mm); + *mmap_locked = false; } } else { result = khugepaged_scan_pmd(mm, vma, addr, From patchwork Mon Apr 14 22:05:53 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nico Pache X-Patchwork-Id: 14051147 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 89D581F4616 for ; Mon, 14 Apr 2025 22:07:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668470; cv=none; b=pGAAq/UH7fT2pJuInLCO0mmdZsOMqEd5esEzhpBJoBPhufJZ+5z5x4CH8Tfjk/Kg9fwzMi8epCwECD+RWFEvY0HBpndy4gCAo+oKQo8mLY9LAstFLyu0ivSNE1YJ/xURyRZKVfqPBBqT+k9BoOH7lPbiD90yGk9ij8qdJGeZy9k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668470; c=relaxed/simple; bh=bPgr21ulKjEoJ/loz4RXwbJFTjY7azALjM2yyNEGvTc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=eHdZcIzFUFmQKNQgJawl/CnRjZk7pO401XwG2x33DH+ZDICQuWwOdkdC03XjHJn+H3O9pXqjESSyKiX6rvfHkjl4Jgz01nnl2iAUiMfkQLSR3vnBf6O/HB3qx0K3qDsJzo1fEhXC5z4dvT2VKEVh7vX2ACu0FCz22jZv9yYmkuk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=L3VU8UwJ; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com 
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="L3VU8UwJ" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1744668467; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=v2gY2PGxKXKwZ0C1exJKjC24fInNkJxDYn7iMPN2Aj4=; b=L3VU8UwJSZLD+u3MEhv7camdoWbNwfVCCu8AN5b8fw1Xrlsxd3V678QRTUfpZRbHzsyLp/ wh+L2XwSyQn21b/+WYG1cWZy4m58qMXLB4V4sdaXdgKfM/eellF1AkTmKimgbm7ddw63rv AZiPZ2+N0+jam71a05tSA5CL67lQD2U= Received: from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-445-RHGgsGgfMuuoUEgjKTt33w-1; Mon, 14 Apr 2025 18:07:44 -0400 X-MC-Unique: RHGgsGgfMuuoUEgjKTt33w-1 X-Mimecast-MFC-AGG-ID: RHGgsGgfMuuoUEgjKTt33w_1744668460 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 72E57180AF66; Mon, 14 Apr 2025 22:07:40 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.91]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id AAD631955BC1; Mon, 14 Apr 2025 22:07:31 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, ryan.roberts@arm.com, willy@infradead.org, peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com Subject: [PATCH v3 08/12] khugepaged: skip collapsing mTHP to smaller orders Date: Mon, 14 Apr 2025 16:05:53 -0600 Message-ID: <20250414220557.35388-9-npache@redhat.com> In-Reply-To: <20250414220557.35388-1-npache@redhat.com> References: <20250414220557.35388-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 khugepaged may try to collapse a mTHP to a smaller mTHP, resulting in some pages being unmapped. Skip these cases until we have a way to check if its ok to collapse to a smaller mTHP size (like in the case of a partially mapped folio). This patch is inspired by Dev Jain's work on khugepaged mTHP support [1]. 
[1] https://lore.kernel.org/lkml/20241216165105.56185-11-dev.jain@arm.com/ Co-developed-by: Dev Jain Signed-off-by: Dev Jain Signed-off-by: Nico Pache --- mm/khugepaged.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index de09de5a3f28..114056276003 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -625,7 +625,12 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma, folio = page_folio(page); VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio); - /* See hpage_collapse_scan_pmd(). */ + if (order != HPAGE_PMD_ORDER && folio_order(folio) >= order) { + result = SCAN_PTE_MAPPED_HUGEPAGE; + goto out; + } + + /* See khugepaged_scan_pmd(). */ if (folio_maybe_mapped_shared(folio)) { ++shared; if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged && From patchwork Mon Apr 14 22:05:54 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nico Pache X-Patchwork-Id: 14051149 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C6AA51F4629 for ; Mon, 14 Apr 2025 22:08:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668485; cv=none; b=pOrdxXIenShlES3QkwF0xNyBh0xaftPE/SZ8mkzdeufkUadgC18GLWk+SFoaq8+wiy7WRm2ekdQW5qTWp3h6zXe2/UZUYnhu35WAp0rlpjZeMNZ0imeBIWosuZtONls3D/f3DAFws6Ksi3hkAheM3QPOwQ3wPJZsj9BG6YF7gx8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668485; c=relaxed/simple; bh=y9tkopVlYYrgq0L9k6gT+6SkhL2M4+erB+aHqUmSUCo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=XoqpM9sLPf9kuv7BVp038U9svgDJV69WJlcF9ct8C1lvmJ/BCL+eJZBAwjSd1AjXq0f/WkVqso/P0TBjYumj1qxfrCRkm6rbGv0SI6vmozzuezNleHvgSjKQaAI3Qhd6BrNcWqLBlywB27tlAfptY0PBE/55dIwoVi1X+1exwQM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=EUNINORH; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="EUNINORH" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1744668482; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ZcJZphRyHLZo+N8WDQs50obgYAPQYa0/jjuYLyo1WUI=; b=EUNINORHLfXolzR0Yxw8cZlERETLT/9HjzKGY4T6Z8++HopnSCYtbV5HlyaFrspbMf7MC5 gOWEOAUD4O9YAwrgy3+2XAU+fllC+AQz18U2RU9WDJSwZXoeP1tpEigH7Kvs6B7/XdNzAB M0Ba73Khwz/eSyreOUPoUcFiH/Z2kks= Received: from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-270-UG3tPO-1Oq2w4uzgC6Xx-w-1; Mon, 14 Apr 2025 18:07:59 -0400 
X-MC-Unique: UG3tPO-1Oq2w4uzgC6Xx-w-1 X-Mimecast-MFC-AGG-ID: UG3tPO-1Oq2w4uzgC6Xx-w_1744668468 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 23D5F1801A07; Mon, 14 Apr 2025 22:07:48 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.91]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id C1A701956094; Mon, 14 Apr 2025 22:07:40 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, ryan.roberts@arm.com, willy@infradead.org, peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com Subject: [PATCH v3 09/12] khugepaged: avoid unnecessary mTHP collapse attempts Date: Mon, 14 Apr 2025 16:05:54 -0600 Message-ID: <20250414220557.35388-10-npache@redhat.com> In-Reply-To: <20250414220557.35388-1-npache@redhat.com> References: <20250414220557.35388-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 There are cases where, if an attempted collapse fails, all subsequent orders are guaranteed to also fail. Avoid these collapse attempts by bailing out early. 
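The decision being added is essentially a two-way classification of scan results; the following stand-alone sketch shows the rule the hunk below encodes. It is a hedged illustration only: worth_trying_smaller_orders is a hypothetical name, and the enum is a trimmed subset of the kernel's full SCAN_* list.

#include <stdbool.h>

enum scan_result {                     /* illustrative subset only */
	SCAN_SUCCEED,
	SCAN_EXCEED_NONE_PTE,
	SCAN_EXCEED_SWAP_PTE,
	SCAN_EXCEED_SHARED_PTE,
	SCAN_ALLOC_HUGE_PAGE_FAIL,
	SCAN_PAGE_RO,
};

/*
 * Failures tied to how full, shared, or allocatable this particular region
 * is may still succeed at a smaller order, so keep walking the bitmap; any
 * other failure dooms every smaller order too, so bail out of the walk.
 */
static bool worth_trying_smaller_orders(enum scan_result ret)
{
	switch (ret) {
	case SCAN_EXCEED_NONE_PTE:
	case SCAN_EXCEED_SWAP_PTE:
	case SCAN_EXCEED_SHARED_PTE:
	case SCAN_ALLOC_HUGE_PAGE_FAIL:
		return true;
	default:
		return false;
	}
}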
Signed-off-by: Nico Pache --- mm/khugepaged.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 114056276003..86e4c5100181 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1366,6 +1366,23 @@ static int khugepaged_scan_bitmap(struct mm_struct *mm, unsigned long address, collapsed += (1 << order); continue; } + /* + * Some ret values indicate all lower order will also + * fail, dont trying to collapse smaller orders + */ + if (ret == SCAN_EXCEED_NONE_PTE || + ret == SCAN_EXCEED_SWAP_PTE || + ret == SCAN_EXCEED_SHARED_PTE || + ret == SCAN_PTE_NON_PRESENT || + ret == SCAN_PTE_UFFD_WP || + ret == SCAN_ALLOC_HUGE_PAGE_FAIL || + ret == SCAN_CGROUP_CHARGE_FAIL || + ret == SCAN_COPY_MC || + ret == SCAN_PAGE_LOCK || + ret == SCAN_PAGE_COUNT) + goto next; + else + break; } next: From patchwork Mon Apr 14 22:05:55 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nico Pache X-Patchwork-Id: 14051148 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6EF301F30B3 for ; Mon, 14 Apr 2025 22:08:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668484; cv=none; b=QCC0f0z1/Xb8YM5ANIgB208VHyVY15caN0HhO7Iq5pWjgLJB7TvbSq0EaM7NAjWv0BzijzJygc25loAM8Ki/6jcI2U3Og4JaLjNCDRsuLtg4zYV9UAJeERy8oLYopF8/IKUEThe7m3wNJzB0z48yVOtMDpmZGYT5+x+RsmGizaY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668484; c=relaxed/simple; bh=U32gvDQqxMxtf3EOf1odIHiGbpMWxGFduMkZ40VJXrg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=aCCKNGJpLt1PBT8UF350jgtC8ygBkVQkr/4IOTD7dS6caQd9eYueELUM+VntiBrm3Nf5nFtpy3Ib0uaSg5KcWS0cuZW2f+Rb9zMWe05H5DUG/OBncFWC3f/Blb1VM31C0Db9weDpilRkZyO++k5d0HSQt6t0cwTmoAndkj/VBYc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=NMrHWO1B; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="NMrHWO1B" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1744668481; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=PYXrSAEt5l2U2Xh9JJaMnzvAc4JHOtXOLyT0D9PfDoI=; b=NMrHWO1Bjk1w43r5ToXobrLNttZ4HoB/mP43KOlOkHeu9smGxrrOAKjces/TUiHQopHsTZ kCENDqglvSH1hA00OoYLGJGo6aDv/LaU/5QitSjPErnJkFMwjjuxOJsPPKZWnhCq6WLoBk hXGx1Z1qoB12+sp8sS0WdbJbi//kkNo= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-347-iMkoo9zeMc6Ie018wIzbaA-1; Mon, 14 Apr 2025 
18:08:00 -0400 X-MC-Unique: iMkoo9zeMc6Ie018wIzbaA-1 X-Mimecast-MFC-AGG-ID: iMkoo9zeMc6Ie018wIzbaA_1744668476 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 0E8771956089; Mon, 14 Apr 2025 22:07:56 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.91]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 729901955BC2; Mon, 14 Apr 2025 22:07:48 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, ryan.roberts@arm.com, willy@infradead.org, peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com Subject: [PATCH v3 10/12] khugepaged: improve tracepoints for mTHP orders Date: Mon, 14 Apr 2025 16:05:55 -0600 Message-ID: <20250414220557.35388-11-npache@redhat.com> In-Reply-To: <20250414220557.35388-1-npache@redhat.com> References: <20250414220557.35388-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 Add the order to the tracepoints to give better insight into what order is being operated at for khugepaged. 
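For reference, the extended events can be watched through tracefs in the usual way; this is a hedged example, assuming the default tracefs mount point, with the sample line simply following the TP_printk format introduced below (pointer and order values are illustrative):

  # echo 1 > /sys/kernel/tracing/events/huge_memory/mm_collapse_huge_page/enable
  # cat /sys/kernel/tracing/trace_pipe
  ... mm_collapse_huge_page: mm=00000000c4f031aa, isolated=1, status=succeeded order=4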
Signed-off-by: Nico Pache --- include/trace/events/huge_memory.h | 34 +++++++++++++++++++----------- mm/khugepaged.c | 10 +++++---- 2 files changed, 28 insertions(+), 16 deletions(-) diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h index 9d5c00b0285c..ea2fe20a39f5 100644 --- a/include/trace/events/huge_memory.h +++ b/include/trace/events/huge_memory.h @@ -92,34 +92,37 @@ TRACE_EVENT(mm_khugepaged_scan_pmd, TRACE_EVENT(mm_collapse_huge_page, - TP_PROTO(struct mm_struct *mm, int isolated, int status), + TP_PROTO(struct mm_struct *mm, int isolated, int status, int order), - TP_ARGS(mm, isolated, status), + TP_ARGS(mm, isolated, status, order), TP_STRUCT__entry( __field(struct mm_struct *, mm) __field(int, isolated) __field(int, status) + __field(int, order) ), TP_fast_assign( __entry->mm = mm; __entry->isolated = isolated; __entry->status = status; + __entry->order = order; ), - TP_printk("mm=%p, isolated=%d, status=%s", + TP_printk("mm=%p, isolated=%d, status=%s order=%d", __entry->mm, __entry->isolated, - __print_symbolic(__entry->status, SCAN_STATUS)) + __print_symbolic(__entry->status, SCAN_STATUS), + __entry->order) ); TRACE_EVENT(mm_collapse_huge_page_isolate, TP_PROTO(struct page *page, int none_or_zero, - int referenced, bool writable, int status), + int referenced, bool writable, int status, int order), - TP_ARGS(page, none_or_zero, referenced, writable, status), + TP_ARGS(page, none_or_zero, referenced, writable, status, order), TP_STRUCT__entry( __field(unsigned long, pfn) @@ -127,6 +130,7 @@ TRACE_EVENT(mm_collapse_huge_page_isolate, __field(int, referenced) __field(bool, writable) __field(int, status) + __field(int, order) ), TP_fast_assign( @@ -135,27 +139,31 @@ TRACE_EVENT(mm_collapse_huge_page_isolate, __entry->referenced = referenced; __entry->writable = writable; __entry->status = status; + __entry->order = order; ), - TP_printk("scan_pfn=0x%lx, none_or_zero=%d, referenced=%d, writable=%d, status=%s", + TP_printk("scan_pfn=0x%lx, none_or_zero=%d, referenced=%d, writable=%d, status=%s order=%d", __entry->pfn, __entry->none_or_zero, __entry->referenced, __entry->writable, - __print_symbolic(__entry->status, SCAN_STATUS)) + __print_symbolic(__entry->status, SCAN_STATUS), + __entry->order) ); TRACE_EVENT(mm_collapse_huge_page_swapin, - TP_PROTO(struct mm_struct *mm, int swapped_in, int referenced, int ret), + TP_PROTO(struct mm_struct *mm, int swapped_in, int referenced, int ret, + int order), - TP_ARGS(mm, swapped_in, referenced, ret), + TP_ARGS(mm, swapped_in, referenced, ret, order), TP_STRUCT__entry( __field(struct mm_struct *, mm) __field(int, swapped_in) __field(int, referenced) __field(int, ret) + __field(int, order) ), TP_fast_assign( @@ -163,13 +171,15 @@ TRACE_EVENT(mm_collapse_huge_page_swapin, __entry->swapped_in = swapped_in; __entry->referenced = referenced; __entry->ret = ret; + __entry->order = order; ), - TP_printk("mm=%p, swapped_in=%d, referenced=%d, ret=%d", + TP_printk("mm=%p, swapped_in=%d, referenced=%d, ret=%d, order=%d", __entry->mm, __entry->swapped_in, __entry->referenced, - __entry->ret) + __entry->ret, + __entry->order) ); TRACE_EVENT(mm_khugepaged_scan_file, diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 86e4c5100181..8cbde2ae0112 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -721,13 +721,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma, } else { result = SCAN_SUCCEED; trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero, - referenced, writable, result); + 
referenced, writable, result, + order); return result; } out: release_pte_pages(pte, _pte, compound_pagelist); trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero, - referenced, writable, result); + referenced, writable, result, order); return result; } @@ -1097,7 +1098,8 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm, result = SCAN_SUCCEED; out: - trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result); + trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result, + order); return result; } @@ -1318,7 +1320,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, *mmap_locked = false; if (folio) folio_put(folio); - trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result); + trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result, order); return result; } From patchwork Mon Apr 14 22:05:56 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nico Pache X-Patchwork-Id: 14051150 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AA4761F8748 for ; Mon, 14 Apr 2025 22:08:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668496; cv=none; b=XCCVJ6bv+FLvxxn7zuAxF+agGhNODEXbWOT7LHLYtjUnLoRq0690sWWIdf6DUi+ctJ+Qkd0TVWIVYeFmulPuTgQYgAI6ypAPCAC8CZzKn08ht4/5D7eJCCX+6ylExt7qcy1w8/JkFLbeXI6w6UT/AbrnuPT6T7qCW6F78+tE2uY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668496; c=relaxed/simple; bh=gOYQ3XRbHOQQ3MYoMLIQrLMfehtNVdwBrNhQfVPp+G0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=p1HdT4DvM6eETdDdyJUGAgkDlfrtxZFhBk5G6/x3npN15T65MLMcaTkSES1jIvQ5TW9kGh5EeWDQl/mmO0tkp9ijfhurQ19JWtJ+j2A+Bj1ojgQG7+J6Uy0bwDxHMvOuVLUcVSEPOuRghvC0Jne1+ZIsRC01Cn7+Npuz8+5CVQ4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=Xd3WTeOo; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Xd3WTeOo" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1744668493; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=08+mOgKQbYDsfFxRpx7aKoRi3ylvPpVEDruY3blZyQE=; b=Xd3WTeOooBAGhayyxFwZHi5yqQ9idhnp0yQaeULVJkRlF5sC4YfHXLQ1B7HUeNgayp7CJZ 6rIHsDGw6QftTSGvwUAKiflrXrFEsnTXF42EW4CqB2FnmCC1dFUpFg6v3AQQC0J90iCMwg OxqziqpukJh5pGEwPoW7rM8UEeaPnRk= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-639-4jUf0svFOeK4-ka9ElZDjA-1; Mon, 14 
Apr 2025 18:08:08 -0400 X-MC-Unique: 4jUf0svFOeK4-ka9ElZDjA-1 X-Mimecast-MFC-AGG-ID: 4jUf0svFOeK4-ka9ElZDjA_1744668484 Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 42DD01956089; Mon, 14 Apr 2025 22:08:04 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.91]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 5CB631956094; Mon, 14 Apr 2025 22:07:56 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, ryan.roberts@arm.com, willy@infradead.org, peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com Subject: [PATCH v3 11/12] khugepaged: add per-order mTHP khugepaged stats Date: Mon, 14 Apr 2025 16:05:56 -0600 Message-ID: <20250414220557.35388-12-npache@redhat.com> In-Reply-To: <20250414220557.35388-1-npache@redhat.com> References: <20250414220557.35388-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 With mTHP support in place, let's add the per-order mTHP stats for exceeding NONE, SWAP, and SHARED.
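Assuming these attributes land in the same per-size stats directory as the existing anon mTHP counters, they would be read like the other stats there. A hedged, illustrative example for the 64kB size on a 4K-page system (the attribute name comes from the DEFINE_MTHP_STAT_ATTR entries below; the value is made up):

  $ cat /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/collapse_exceed_none_pte
  14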
Signed-off-by: Nico Pache --- include/linux/huge_mm.h | 3 +++ mm/huge_memory.c | 7 +++++++ mm/khugepaged.c | 16 +++++++++++++--- 3 files changed, 23 insertions(+), 3 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 2acd9c1fc558..95be9d266eff 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -139,6 +139,9 @@ enum mthp_stat_item { MTHP_STAT_SPLIT_DEFERRED, MTHP_STAT_NR_ANON, MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, + MTHP_STAT_COLLAPSE_EXCEED_SWAP, + MTHP_STAT_COLLAPSE_EXCEED_NONE, + MTHP_STAT_COLLAPSE_EXCEED_SHARED, __MTHP_STAT_COUNT }; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 6db73d2d0962..35515bff392b 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -633,6 +633,10 @@ DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED); DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED); DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON); DEFINE_MTHP_STAT_ATTR(nr_anon_partially_mapped, MTHP_STAT_NR_ANON_PARTIALLY_MAPPED); +DEFINE_MTHP_STAT_ATTR(collapse_exceed_swap_pte, MTHP_STAT_COLLAPSE_EXCEED_SWAP); +DEFINE_MTHP_STAT_ATTR(collapse_exceed_none_pte, MTHP_STAT_COLLAPSE_EXCEED_NONE); +DEFINE_MTHP_STAT_ATTR(collapse_exceed_shared_pte, MTHP_STAT_COLLAPSE_EXCEED_SHARED); + static struct attribute *anon_stats_attrs[] = { &anon_fault_alloc_attr.attr, @@ -649,6 +653,9 @@ static struct attribute *anon_stats_attrs[] = { &split_deferred_attr.attr, &nr_anon_attr.attr, &nr_anon_partially_mapped_attr.attr, + &collapse_exceed_swap_pte_attr.attr, + &collapse_exceed_none_pte_attr.attr, + &collapse_exceed_shared_pte_attr.attr, NULL, }; diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 8cbde2ae0112..d35abae76c93 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -604,7 +604,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma, continue; } else { result = SCAN_EXCEED_NONE_PTE; - count_vm_event(THP_SCAN_EXCEED_NONE_PTE); + if (order == HPAGE_PMD_ORDER) + count_vm_event(THP_SCAN_EXCEED_NONE_PTE); + else + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_NONE); goto out; } } @@ -633,8 +636,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma, /* See khugepaged_scan_pmd(). 
*/ if (folio_maybe_mapped_shared(folio)) { ++shared; - if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged && - shared > khugepaged_max_ptes_shared)) { + if (order != HPAGE_PMD_ORDER) { + result = SCAN_EXCEED_SHARED_PTE; + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED); + goto out; + } + + if (cc->is_khugepaged && + shared > khugepaged_max_ptes_shared) { result = SCAN_EXCEED_SHARED_PTE; count_vm_event(THP_SCAN_EXCEED_SHARED_PTE); goto out; @@ -1060,6 +1069,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm, /* Dont swapin for mTHP collapse */ if (order != HPAGE_PMD_ORDER) { + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED); result = SCAN_EXCEED_SWAP_PTE; goto out; } From patchwork Mon Apr 14 22:05:57 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nico Pache X-Patchwork-Id: 14051151 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 18FE81F1927 for ; Mon, 14 Apr 2025 22:08:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668505; cv=none; b=nmqb1MCSeZGAonqYVKsEAp1Ri1dVJE+bezm3cWX+daD8oBwnqENGrNBGJoC2S+qZBkb+Ek5KzUNWU0Rz9z/TuSDxgUiesFBPYG+czJL1ADaJG2jJQCsHwUlx13TVuRxcfCin0kzjkxrFOfi4aeJl9sFiEuTgG4zzMfKZHnHaTwY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744668505; c=relaxed/simple; bh=ZQqDjpLvAa6SMF2b8rUqqfvpCFXxaImQ/0oDxePwhQc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Zb3QOZv/eqy7o24BQC1r8nmPcKHfcp6I8h9CYrB+LVMHelW14z8ZCHfsSAEiIU4vtgbj9KGkQZ5b90l2/X8DR0lQtyE0AUo5y2jhDqtkjwd5BWnODI4Gq+WY2/LTeCFtALSKLe9cJ4OdbKQW0/+exSsxJyN8K2p1wxbDdKe8auE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=hMlabERP; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="hMlabERP" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1744668502; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Y7UZe2oIzsDa6S5L28Z8pby9GjFeNNywMq4TlSe4zv0=; b=hMlabERPlm/fmmYJI5hCQj+jf+8jULdYBgn6ul5lhMq5IVcUr6xxtbwa6YW988m8rP6IbL 4KeeacH2EBLyQTjEDnsU58F5RwBQR2rwjzXvwXbycfc15c9Cd5QCgweyBasOgQdU/QVwfF fAgOYkuW8CM9V021WudATE4h8Lr+tg0= Received: from mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-299-Ofd_pkTNOEOBqw--4P-yjQ-1; Mon, 14 Apr 2025 18:08:16 -0400 X-MC-Unique: Ofd_pkTNOEOBqw--4P-yjQ-1 X-Mimecast-MFC-AGG-ID: Ofd_pkTNOEOBqw--4P-yjQ_1744668493 Received: from 
mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 4A7141955DC6; Mon, 14 Apr 2025 22:08:12 +0000 (UTC) Received: from h1.redhat.com (unknown [10.22.64.91]) by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 9187A1956094; Mon, 14 Apr 2025 22:08:04 +0000 (UTC) From: Nico Pache To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, ryan.roberts@arm.com, willy@infradead.org, peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com Subject: [PATCH v3 12/12] Documentation: mm: update the admin guide for mTHP collapse Date: Mon, 14 Apr 2025 16:05:57 -0600 Message-ID: <20250414220557.35388-13-npache@redhat.com> In-Reply-To: <20250414220557.35388-1-npache@redhat.com> References: <20250414220557.35388-1-npache@redhat.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15 Now that we can collapse to mTHPs lets update the admin guide to reflect these changes and provide proper guidence on how to utilize it. Signed-off-by: Nico Pache --- Documentation/admin-guide/mm/transhuge.rst | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst index dff8d5985f0f..f0d4e78cedaa 100644 --- a/Documentation/admin-guide/mm/transhuge.rst +++ b/Documentation/admin-guide/mm/transhuge.rst @@ -63,7 +63,7 @@ often. THP can be enabled system wide or restricted to certain tasks or even memory ranges inside task's address space. Unless THP is completely disabled, there is ``khugepaged`` daemon that scans memory and -collapses sequences of basic pages into PMD-sized huge pages. +collapses sequences of basic pages into huge pages. The THP behaviour is controlled via :ref:`sysfs ` interface and using madvise(2) and prctl(2) system calls. @@ -144,6 +144,13 @@ hugepage sizes have enabled="never". If enabling multiple hugepage sizes, the kernel will select the most appropriate enabled size for a given allocation. +khugepaged uses max_ptes_none scaled to the order of the enabled mTHP size to +determine collapses. When using mTHPs its recommended to set max_ptes_none low. +Ideally less than HPAGE_PMD_NR / 2 (255 on 4k page size). This will prevent +undesired "creep" behavior that leads to continuously collapsing to a larger +mTHP size. 
max_ptes_shared and max_ptes_swap have no effect when collapsing to a +mTHP, and mTHP collapse will fail on shared or swapped out pages. + It's also possible to limit defrag efforts in the VM to generate anonymous hugepages in case they're not immediately free to madvise regions or to never try to defrag memory and simply fallback to regular