From patchwork Wed Sep 13 16:49:37 2023
X-Patchwork-Submitter: Vern Hao
X-Patchwork-Id: 13383549
From: Vern Hao
To: hannes@cmpxchg.org
Cc: mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
	akpm@linux-foundation.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, haoxing990@gmail.com, Xin Hao
Subject: [PATCH v3] mm: memcg: add THP swap out info for anonymous reclaim
Date: Thu, 14 Sep 2023 00:49:37 +0800
Message-ID: <20230913164938.16918-1-vernhao@tencent.com>
From: Xin Hao

At present, we support a per-memcg reclaim strategy, but we do not know
how many transparent huge pages are being reclaimed. Transparent huge
pages need to be split before they can be reclaimed, which can create a
performance bottleneck.
For example, when two memcgs (A and B) are reclaiming anonymous pages at
the same time and memcg 'A' is reclaiming a large number of transparent
huge pages, we can determine that the performance bottleneck is caused
by memcg 'A'. To make such problems easier to analyze, add THP swap-out
info to the per-memcg statistics.

Suggested-by: Johannes Weiner
Signed-off-by: Xin Hao
Acked-by: Johannes Weiner
---
v2 -> v3
- Small fixes as suggested by Johannes Weiner.
- Add 'thp_swpout' and 'thp_swpout_fallback' to
  Documentation/admin-guide/cgroup-v2.rst.
v1 -> v2
- Some fixes as suggested by Johannes Weiner.
v2: https://lore.kernel.org/linux-mm/20230912021727.61601-1-vernhao@tencent.com/
v1: https://lore.kernel.org/linux-mm/20230911160824.GB103342@cmpxchg.org/T/

 Documentation/admin-guide/cgroup-v2.rst | 9 +++++++++
 mm/memcontrol.c                         | 2 ++
 mm/page_io.c                            | 8 ++++----
 mm/vmscan.c                             | 1 +
 4 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index b26b5274eaaf..622a7f28db1f 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1532,6 +1532,15 @@ PAGE_SIZE multiple when read back.
 	  collapsing an existing range of pages. This counter is not
 	  present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
 
+	  thp_swpout (npn)
+		Number of transparent hugepages which are swapout in one piece
+		without splitting.
+
+	  thp_swpout_fallback (npn)
+		Number of transparent hugepages which were split before swapout.
+		Usually because failed to allocate some continuous swap space
+		for the huge page.
+
   memory.numa_stat
 	A read-only nested-keyed file which exists on non-root cgroups.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 811491b99e3e..9f84b3f7b469 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -704,6 +704,8 @@ static const unsigned int memcg_vm_event_stat[] = {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	THP_FAULT_ALLOC,
 	THP_COLLAPSE_ALLOC,
+	THP_SWPOUT,
+	THP_SWPOUT_FALLBACK,
 #endif
 };
 
diff --git a/mm/page_io.c b/mm/page_io.c
index fe4c21af23f2..7cf358158cf1 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -208,8 +208,10 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 static inline void count_swpout_vm_event(struct folio *folio)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (unlikely(folio_test_pmd_mappable(folio)))
+	if (unlikely(folio_test_pmd_mappable(folio))) {
+		count_memcg_folio_events(folio, THP_SWPOUT, 1);
 		count_vm_event(THP_SWPOUT);
+	}
 #endif
 	count_vm_events(PSWPOUT, folio_nr_pages(folio));
 }
@@ -278,9 +280,6 @@ static void sio_write_complete(struct kiocb *iocb, long ret)
 			set_page_dirty(page);
 			ClearPageReclaim(page);
 		}
-	} else {
-		for (p = 0; p < sio->pages; p++)
-			count_swpout_vm_event(page_folio(sio->bvec[p].bv_page));
 	}
 
 	for (p = 0; p < sio->pages; p++)
@@ -296,6 +295,7 @@ static void swap_writepage_fs(struct page *page, struct writeback_control *wbc)
 	struct file *swap_file = sis->swap_file;
 	loff_t pos = page_file_offset(page);
 
+	count_swpout_vm_event(page_folio(sio->bvec[p].bv_page));
 	set_page_writeback(page);
 	unlock_page(page);
 	if (wbc->swap_plug)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 00b24c3b2b04..661615fa709b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1227,6 +1227,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 							folio_list))
 						goto activate_locked;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+					count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
 					count_vm_event(THP_SWPOUT_FALLBACK);
 #endif
 					if (!add_to_swap(folio))
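With this patch applied, the new counters show up alongside the existing
entries in each cgroup's memory.stat file, in the usual "name value"
per-line format. A minimal sketch of reading them from userspace (the
sample text and the `parse_memory_stat` helper are illustrative; on a
real system you would read /sys/fs/cgroup/<group>/memory.stat from a
kernel carrying this patch):

```python
# Illustrative sketch: parse THP swap-out counters from cgroup v2
# memory.stat output. memory.stat is a flat keyed file: one
# "counter value" pair per line.

SAMPLE = """\
pgfault 1024
thp_fault_alloc 12
thp_swpout 3
thp_swpout_fallback 1
"""


def parse_memory_stat(text):
    """Return the memory.stat contents as a {counter: int} dict."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            stats[key] = int(value)
    return stats


stats = parse_memory_stat(SAMPLE)
# thp_swpout counts THPs swapped out whole; thp_swpout_fallback counts
# THPs that had to be split before swap-out.
print(stats["thp_swpout"], stats["thp_swpout_fallback"])
```

A high thp_swpout_fallback relative to thp_swpout in one memcg would point at that group as the source of split-related reclaim overhead, which is the analysis the commit message describes.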