From patchwork Thu Oct 28 11:56:54 2021
X-Patchwork-Submitter: Ning Zhang
X-Patchwork-Id: 12589939
From: Ning Zhang <ningzhang@linux.alibaba.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov, Yu Zhao
Subject: [RFC 5/6] mm, thp: add some statistics for zero subpages reclaim
Date: Thu, 28 Oct 2021 19:56:54 +0800
Message-Id: <1635422215-99394-6-git-send-email-ningzhang@linux.alibaba.com>
In-Reply-To: <1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com>
References: <1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com>

queue_length shows the number of huge pages in the queue.
split_hpage shows the number of huge pages split by thp reclaim.
split_failed shows the number of huge pages that failed to split.
reclaim_subpage shows the number of zero subpages reclaimed by thp reclaim.
Signed-off-by: Ning Zhang <ningzhang@linux.alibaba.com>
---
 include/linux/huge_mm.h |  3 ++-
 include/linux/mmzone.h  |  3 +++
 mm/huge_memory.c        |  8 ++++++--
 mm/memcontrol.c         | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 mm/vmscan.c             |  2 +-
 5 files changed, 59 insertions(+), 4 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f792433..5d4a038 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -189,7 +189,8 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 extern int global_thp_reclaim;
 int zsr_get_hpage(struct hpage_reclaim *hr_queue, struct page **reclaim_page,
 		  int threshold);
-unsigned long zsr_reclaim_hpage(struct lruvec *lruvec, struct page *page);
+unsigned long zsr_reclaim_hpage(struct hpage_reclaim *hr_queue,
+				struct lruvec *lruvec, struct page *page);
 void zsr_reclaim_memcg(struct mem_cgroup *memcg);
 static inline struct list_head *hpage_reclaim_list(struct page *page)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 222cd4f..6ce6890 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -792,6 +792,9 @@ struct hpage_reclaim {
 	spinlock_t reclaim_queue_lock;
 	struct list_head reclaim_queue;
 	unsigned long reclaim_queue_len;
+	atomic_long_t split_hpage;
+	atomic_long_t split_failed;
+	atomic_long_t reclaim_subpage;
 };
 #endif
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 633fd0f..5e737d0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3506,7 +3506,8 @@ int zsr_get_hpage(struct hpage_reclaim *hr_queue, struct page **reclaim_page,
 }
 
-unsigned long zsr_reclaim_hpage(struct lruvec *lruvec, struct page *page)
+unsigned long zsr_reclaim_hpage(struct hpage_reclaim *hr_queue,
+				struct lruvec *lruvec, struct page *page)
 {
 	struct pglist_data *pgdat = page_pgdat(page);
 	unsigned long reclaimed;
@@ -3523,12 +3524,15 @@ unsigned long zsr_reclaim_hpage(struct lruvec *lruvec, struct page *page)
 		putback_lru_page(page);
 		mod_node_page_state(pgdat, NR_ISOLATED_ANON, -HPAGE_PMD_NR);
+		atomic_long_inc(&hr_queue->split_failed);
 		return 0;
 	}
 
 	unlock_page(page);
 	list_add_tail(&page->lru, &split_list);
 	reclaimed = reclaim_zero_subpages(&split_list, &keep_list);
+	atomic_long_inc(&hr_queue->split_hpage);
+	atomic_long_add(reclaimed, &hr_queue->reclaim_subpage);
 
 	spin_lock_irqsave(&lruvec->lru_lock, flags);
 	move_pages_to_lru(lruvec, &keep_list);
@@ -3564,7 +3568,7 @@ void zsr_reclaim_memcg(struct mem_cgroup *memcg)
 			if (!page)
 				continue;
 
-			zsr_reclaim_hpage(lruvec, page);
+			zsr_reclaim_hpage(hr_queue, lruvec, page);
 
 			cond_resched();
 		}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a8e3ca1..f8016ba 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4580,6 +4580,49 @@ static ssize_t memcg_thp_reclaim_ctrl_write(struct kernfs_open_file *of,
 	return nbytes;
 }
 
+static int memcg_thp_reclaim_stat_show(struct seq_file *m, void *v)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
+	struct mem_cgroup_per_node *mz;
+	int nid;
+	unsigned long len;
+
+	seq_puts(m, "queue_length\t");
+	for_each_node(nid) {
+		mz = memcg->nodeinfo[nid];
+		len = READ_ONCE(mz->hpage_reclaim_queue.reclaim_queue_len);
+		seq_printf(m, "%-24lu", len);
+	}
+
+	seq_puts(m, "\n");
+	seq_puts(m, "split_hpage\t");
+	for_each_node(nid) {
+		mz = memcg->nodeinfo[nid];
+		len = atomic_long_read(&mz->hpage_reclaim_queue.split_hpage);
+		seq_printf(m, "%-24lu", len);
+	}
+
+	seq_puts(m, "\n");
+	seq_puts(m, "split_failed\t");
+	for_each_node(nid) {
+		mz = memcg->nodeinfo[nid];
+		len = atomic_long_read(&mz->hpage_reclaim_queue.split_failed);
+		seq_printf(m, "%-24lu", len);
+	}
+
+	seq_puts(m, "\n");
+	seq_puts(m, "reclaim_subpage\t");
+	for_each_node(nid) {
+		mz = memcg->nodeinfo[nid];
+		len = atomic_long_read(&mz->hpage_reclaim_queue.reclaim_subpage);
+		seq_printf(m, "%-24lu", len);
+	}
+
+	seq_puts(m, "\n");
+
+	return 0;
+}
 #endif
 
 #ifdef CONFIG_CGROUP_WRITEBACK
@@ -5155,6 +5198,10 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
 		.seq_show = memcg_thp_reclaim_ctrl_show,
 		.write = memcg_thp_reclaim_ctrl_write,
 	},
+	{
+		.name = "thp_reclaim_stat",
+		.seq_show = memcg_thp_reclaim_stat_show,
+	},
 #endif
 	{ },	/* terminate */
 };
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fcc80a6..cb5f53d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2818,7 +2818,7 @@ static unsigned long reclaim_hpage_zero_subpages(struct lruvec *lruvec,
 		if (!page)
 			continue;
 
-		nr_reclaimed += zsr_reclaim_hpage(lruvec, page);
+		nr_reclaimed += zsr_reclaim_hpage(hr_queue, lruvec, page);
 
 		cond_resched();