From patchwork Wed Oct 26 18:01:33 2022
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton
Cc: Matthew Wilcox, Yang Shi, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Eric Bergen
Subject: [PATCH v2] mm: vmscan: split khugepaged stats from direct reclaim stats
Date: Wed, 26 Oct 2022 14:01:33 -0400
Message-Id: <20221026180133.377671-1-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.38.1

Direct reclaim stats are useful for identifying a potential source of
application latency, as well as for spotting issues with kswapd.
However, khugepaged currently distorts the picture: as a kernel thread
it doesn't impose allocation latencies on userspace, and it explicitly
opts out of kswapd reclaim. Its activity showing up in the direct
reclaim stats is misleading. Counting it as kswapd reclaim could also
cause confusion when trying to understand actual kswapd behavior.

Break out khugepaged from the direct reclaim counters into new
pgsteal_khugepaged, pgdemote_khugepaged, pgscan_khugepaged counters.

Test with a huge executable (CONFIG_READ_ONLY_THP_FOR_FS):

pgsteal_kswapd 1342185
pgsteal_direct 0
pgsteal_khugepaged 3623
pgscan_kswapd 1345025
pgscan_direct 0
pgscan_khugepaged 3623

Reported-by: Eric Bergen
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 Documentation/admin-guide/cgroup-v2.rst |  6 +++++
 include/linux/khugepaged.h              |  6 +++++
 include/linux/vm_event_item.h           |  3 +++
 mm/khugepaged.c                         |  5 ++++
 mm/memcontrol.c                         |  8 +++++--
 mm/vmscan.c                             | 32 ++++++++++++++++++-------
 mm/vmstat.c                             |  3 +++
 7 files changed, 53 insertions(+), 10 deletions(-)

v2: reclaimer_offset(): magic -> muggle (Willy)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index dc254a3cb956..74cec76be9f2 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1488,12 +1488,18 @@ PAGE_SIZE multiple when read back.
 	  pgscan_direct (npn)
 		Amount of scanned pages directly (in an inactive LRU list)
 
+	  pgscan_khugepaged (npn)
+		Amount of scanned pages by khugepaged (in an inactive LRU list)
+
 	  pgsteal_kswapd (npn)
 		Amount of reclaimed pages by kswapd
 
 	  pgsteal_direct (npn)
 		Amount of reclaimed pages directly
+
+	  pgsteal_khugepaged (npn)
+		Amount of reclaimed pages by khugepaged
 
 	  pgfault (npn)
 		Total number of page faults incurred
diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 70162d707caf..f68865e19b0b 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -15,6 +15,7 @@ extern void __khugepaged_exit(struct mm_struct *mm);
 extern void khugepaged_enter_vma(struct vm_area_struct *vma,
 				 unsigned long vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
+extern bool current_is_khugepaged(void);
 #ifdef CONFIG_SHMEM
 extern int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 				   bool install_pmd);
@@ -57,6 +58,11 @@ static inline int collapse_pte_mapped_thp(struct mm_struct *mm,
 static inline void khugepaged_min_free_kbytes_update(void)
 {
 }
+
+static inline bool current_is_khugepaged(void)
+{
+	return false;
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #endif /* _LINUX_KHUGEPAGED_H */
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 3518dba1e02f..7f5d1caf5890 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -40,10 +40,13 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		PGREUSE,
 		PGSTEAL_KSWAPD,
 		PGSTEAL_DIRECT,
+		PGSTEAL_KHUGEPAGED,
 		PGDEMOTE_KSWAPD,
 		PGDEMOTE_DIRECT,
+		PGDEMOTE_KHUGEPAGED,
 		PGSCAN_KSWAPD,
 		PGSCAN_DIRECT,
+		PGSCAN_KHUGEPAGED,
 		PGSCAN_DIRECT_THROTTLE,
 		PGSCAN_ANON,
 		PGSCAN_FILE,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4734315f7940..36318ebbf50d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2528,6 +2528,11 @@ void khugepaged_min_free_kbytes_update(void)
 	mutex_unlock(&khugepaged_mutex);
 }
 
+bool current_is_khugepaged(void)
+{
+	return kthread_func(current) == khugepaged;
+}
+
 static int madvise_collapse_errno(enum scan_result r)
 {
 	/*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2d8549ae1b30..a17a5cfa6a55 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -661,8 +661,10 @@ static const unsigned int memcg_vm_event_stat[] = {
 	PGPGOUT,
 	PGSCAN_KSWAPD,
 	PGSCAN_DIRECT,
+	PGSCAN_KHUGEPAGED,
 	PGSTEAL_KSWAPD,
 	PGSTEAL_DIRECT,
+	PGSTEAL_KHUGEPAGED,
 	PGFAULT,
 	PGMAJFAULT,
 	PGREFILL,
@@ -1574,10 +1576,12 @@ static void memory_stat_format(struct mem_cgroup *memcg, char *buf, int bufsize)
 	/* Accumulated memory events */
 
 	seq_buf_printf(&s, "pgscan %lu\n",
 		       memcg_events(memcg, PGSCAN_KSWAPD) +
-		       memcg_events(memcg, PGSCAN_DIRECT));
+		       memcg_events(memcg, PGSCAN_DIRECT) +
+		       memcg_events(memcg, PGSCAN_KHUGEPAGED));
 	seq_buf_printf(&s, "pgsteal %lu\n",
 		       memcg_events(memcg, PGSTEAL_KSWAPD) +
-		       memcg_events(memcg, PGSTEAL_DIRECT));
+		       memcg_events(memcg, PGSTEAL_DIRECT) +
+		       memcg_events(memcg, PGSTEAL_KHUGEPAGED));
 	for (i = 0; i < ARRAY_SIZE(memcg_vm_event_stat); i++) {
 		if (memcg_vm_event_stat[i] == PGPGIN ||
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 04d8b88e5216..a54d567c5e66 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -54,6 +54,7 @@
 #include
 #include
 #include
+#include <linux/khugepaged.h>
 #include
 #include
 
@@ -1047,6 +1048,24 @@ void drop_slab(void)
 		drop_slab_node(nid);
 }
 
+static int reclaimer_offset(void)
+{
+	BUILD_BUG_ON(PGSTEAL_DIRECT - PGSTEAL_KSWAPD !=
+			PGDEMOTE_DIRECT - PGDEMOTE_KSWAPD);
+	BUILD_BUG_ON(PGSTEAL_DIRECT - PGSTEAL_KSWAPD !=
+			PGSCAN_DIRECT - PGSCAN_KSWAPD);
+	BUILD_BUG_ON(PGSTEAL_KHUGEPAGED - PGSTEAL_KSWAPD !=
+			PGDEMOTE_KHUGEPAGED - PGDEMOTE_KSWAPD);
+	BUILD_BUG_ON(PGSTEAL_KHUGEPAGED - PGSTEAL_KSWAPD !=
+			PGSCAN_KHUGEPAGED - PGSCAN_KSWAPD);
+
+	if (current_is_kswapd())
+		return 0;
+	if (current_is_khugepaged())
+		return PGSTEAL_KHUGEPAGED - PGSTEAL_KSWAPD;
+	return PGSTEAL_DIRECT - PGSTEAL_KSWAPD;
+}
+
 static inline int is_page_cache_freeable(struct folio *folio)
 {
 	/*
@@ -1599,10 +1618,7 @@ static
unsigned int demote_folio_list(struct list_head *demote_folios,
 			(unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
 			&nr_succeeded);
 
-	if (current_is_kswapd())
-		__count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);
-	else
-		__count_vm_events(PGDEMOTE_DIRECT, nr_succeeded);
+	__count_vm_events(PGDEMOTE_KSWAPD + reclaimer_offset(), nr_succeeded);
 
 	return nr_succeeded;
 }
@@ -2475,7 +2491,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 				     &nr_scanned, sc, lru);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
-	item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
+	item = PGSCAN_KSWAPD + reclaimer_offset();
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_scanned);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);
@@ -2492,7 +2508,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	move_folios_to_lru(lruvec, &folio_list);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
-	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
+	item = PGSTEAL_KSWAPD + reclaimer_offset();
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_reclaimed);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
@@ -4857,7 +4873,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 			break;
 	}
 
-	item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
+	item = PGSCAN_KSWAPD + reclaimer_offset();
 	if (!cgroup_reclaim(sc)) {
 		__count_vm_events(item, isolated);
 		__count_vm_events(PGREFILL, sorted);
@@ -5015,7 +5031,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 	if (walk && walk->batched)
 		reset_batch_size(lruvec, walk);
 
-	item = current_is_kswapd() ?
PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
+	item = PGSTEAL_KSWAPD + reclaimer_offset();
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, reclaimed);
 	__count_memcg_events(memcg, item, reclaimed);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index b2371d745e00..1ea6a5ce1c41 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1271,10 +1271,13 @@ const char * const vmstat_text[] = {
 	"pgreuse",
 	"pgsteal_kswapd",
 	"pgsteal_direct",
+	"pgsteal_khugepaged",
 	"pgdemote_kswapd",
 	"pgdemote_direct",
+	"pgdemote_khugepaged",
 	"pgscan_kswapd",
 	"pgscan_direct",
+	"pgscan_khugepaged",
 	"pgscan_direct_throttle",
 	"pgscan_anon",
 	"pgscan_file",