From patchwork Mon Apr 20 22:11:19 2020
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11499955
From: Johannes Weiner <hannes@cmpxchg.org>
To: Joonsoo Kim, Alex Shi
Cc: Shakeel Butt, Hugh Dickins, Michal Hocko, "Kirill A. Shutemov",
    Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 11/18] mm: memcontrol: switch to native NR_ANON_THPS counter
Date: Mon, 20 Apr 2020 18:11:19 -0400
Message-Id: <20200420221126.341272-12-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200420221126.341272-1-hannes@cmpxchg.org>
References: <20200420221126.341272-1-hannes@cmpxchg.org>

With rmap memcg locking already in place for NR_ANON_MAPPED, it's just
a small step to remove the MEMCG_RSS_HUGE wart and switch memcg to the
native NR_ANON_THPS accounting sites.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Joonsoo Kim
---
 include/linux/memcontrol.h |  3 +--
 mm/huge_memory.c           |  4 +++-
 mm/memcontrol.c            | 39 ++++++++++++++++----------------------
 mm/rmap.c                  |  6 +++---
 4 files changed, 23 insertions(+), 29 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bfb1d961e346..9ac8122ec1cd 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -29,8 +29,7 @@ struct kmem_cache;

 /* Cgroup-specific page state, on top of universal node page state */
 enum memcg_stat_item {
-	MEMCG_RSS_HUGE = NR_VM_NODE_STAT_ITEMS,
-	MEMCG_SWAP,
+	MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
 	MEMCG_SOCK,
 	/* XXX: why are these zone and not node counters? */
 	MEMCG_KERNEL_STACK_KB,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e9355a463e74..da6c413a75a5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2359,15 +2359,17 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			atomic_inc(&page[i]._mapcount);
 	}

+	lock_page_memcg(page);
 	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
 		/* Last compound_mapcount is gone. */
-		__dec_node_page_state(page, NR_ANON_THPS);
+		__dec_lruvec_page_state(page, NR_ANON_THPS);
 		if (TestClearPageDoubleMap(page)) {
 			/* No need in mapcount reference anymore */
 			for (i = 0; i < HPAGE_PMD_NR; i++)
 				atomic_dec(&page[i]._mapcount);
 		}
 	}
+	unlock_page_memcg(page);

 	smp_wmb(); /* make pte visible before pmd */
 	pmd_populate(mm, pmd, pgtable);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c87178d6219f..7845a87b94d5 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -836,11 +836,6 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
 					 struct page *page,
 					 int nr_pages)
 {
-	if (abs(nr_pages) > 1) {
-		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
-		__mod_memcg_state(memcg, MEMCG_RSS_HUGE, nr_pages);
-	}
-
 	/* pagein of a big page is an event. So, ignore page size */
 	if (nr_pages > 0)
 		__count_memcg_events(memcg, PGPGIN, 1);
@@ -1406,15 +1401,9 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
 		       (u64)memcg_page_state(memcg, NR_WRITEBACK) *
 		       PAGE_SIZE);

-	/*
-	 * TODO: We should eventually replace our own MEMCG_RSS_HUGE counter
-	 * with the NR_ANON_THP vm counter, but right now it's a pain in the
-	 * arse because it requires migrating the work out of rmap to a place
-	 * where the page->mem_cgroup is set up and stable.
-	 */
 	seq_buf_printf(&s, "anon_thp %llu\n",
-		       (u64)memcg_page_state(memcg, MEMCG_RSS_HUGE) *
-		       PAGE_SIZE);
+		       (u64)memcg_page_state(memcg, NR_ANON_THPS) *
+		       HPAGE_PMD_NR * PAGE_SIZE);

 	for (i = 0; i < NR_LRU_LISTS; i++)
 		seq_buf_printf(&s, "%s %llu\n", lru_list_name(i),
@@ -3006,8 +2995,6 @@ void mem_cgroup_split_huge_fixup(struct page *head)

 	for (i = 1; i < HPAGE_PMD_NR; i++)
 		head[i].mem_cgroup = head->mem_cgroup;
-
-	__mod_memcg_state(head->mem_cgroup, MEMCG_RSS_HUGE, -HPAGE_PMD_NR);
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -3762,7 +3749,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
 static const unsigned int memcg1_stats[] = {
 	NR_FILE_PAGES,
 	NR_ANON_MAPPED,
-	MEMCG_RSS_HUGE,
+	NR_ANON_THPS,
 	NR_SHMEM,
 	NR_FILE_MAPPED,
 	NR_FILE_DIRTY,
@@ -3799,11 +3786,14 @@ static int memcg_stat_show(struct seq_file *m, void *v)
 	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));

 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
+		unsigned long nr;
+
 		if (memcg1_stats[i] == MEMCG_SWAP && !do_memsw_account())
 			continue;
-		seq_printf(m, "%s %lu\n", memcg1_stat_names[i],
-			   memcg_page_state_local(memcg, memcg1_stats[i]) *
-			   PAGE_SIZE);
+		nr = memcg_page_state_local(memcg, memcg1_stats[i]);
+		if (memcg1_stats[i] == NR_ANON_THPS)
+			nr *= HPAGE_PMD_NR;
+		seq_printf(m, "%s %lu\n", memcg1_stat_names[i], nr * PAGE_SIZE);
 	}

 	for (i = 0; i < ARRAY_SIZE(memcg1_events); i++)
@@ -5392,6 +5382,13 @@ static int mem_cgroup_move_account(struct page *page,
 	if (page_mapped(page)) {
 		__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
 		__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
+		if (PageTransHuge(page)) {
+			__mod_lruvec_state(from_vec, NR_ANON_THPS,
+					   -nr_pages);
+			__mod_lruvec_state(to_vec, NR_ANON_THPS,
+					   nr_pages);
+		}
+
 	} else {
 		__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
@@ -6611,7 +6608,6 @@ struct uncharge_gather {
 	unsigned long nr_pages;
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
-	unsigned long nr_huge;
 	struct page *dummy_page;
 };
@@ -6634,7 +6630,6 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 	}

 	local_irq_save(flags);
-	__mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge);
 	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
 	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events,
 		       ug->nr_pages);
 	memcg_check_events(ug->memcg, ug->dummy_page);
@@ -6673,8 +6668,6 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	ug->nr_pages += nr_pages;

 	if (!PageKmemcg(page)) {
-		if (PageTransHuge(page))
-			ug->nr_huge += nr_pages;
 		ug->pgpgout++;
 	} else {
 		ug->nr_kmem += nr_pages;
diff --git a/mm/rmap.c b/mm/rmap.c
index 150513d31efa..ad4a0fdcc94c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1138,7 +1138,7 @@ void do_page_add_anon_rmap(struct page *page,
 	 * disabled.
 	 */
 	if (compound)
-		__inc_node_page_state(page, NR_ANON_THPS);
+		__inc_lruvec_page_state(page, NR_ANON_THPS);
 	__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
 }
@@ -1180,7 +1180,7 @@ void page_add_new_anon_rmap(struct page *page,
 		if (hpage_pincount_available(page))
 			atomic_set(compound_pincount_ptr(page), 0);
-		__inc_node_page_state(page, NR_ANON_THPS);
+		__inc_lruvec_page_state(page, NR_ANON_THPS);
 	} else {
 		/* Anon THP always mapped first with PMD */
 		VM_BUG_ON_PAGE(PageTransCompound(page), page);
@@ -1286,7 +1286,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return;

-	__dec_node_page_state(page, NR_ANON_THPS);
+	__dec_lruvec_page_state(page, NR_ANON_THPS);

 	if (TestClearPageDoubleMap(page)) {
 		/*
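
[The patch as archived is truncated in the final rmap.c hunk.]

The direction of the conversion is easiest to see side by side. Before,
a THP rmap event bumped a node-only vmstat counter while the memcg copy
(MEMCG_RSS_HUGE) was maintained separately in the charge/uncharge paths;
after, one lruvec-aware call updates the node and memcg counters
together. The following is a condensed sketch of that pattern, not a
verbatim call site from the patch; it only combines the helpers visible
in the hunks above:

	/*
	 * Old scheme: two independent bookkeeping sites that had to be
	 * kept in agreement by hand.
	 */
	__inc_node_page_state(page, NR_ANON_THPS);	    /* node counter only */
	__mod_memcg_state(memcg, MEMCG_RSS_HUGE, nr_pages); /* memcg-private copy */

	/*
	 * New scheme: a single native accounting site.  lock_page_memcg()
	 * keeps page->mem_cgroup stable, so the lruvec helper can charge
	 * the node and the memcg counter in one call.
	 */
	lock_page_memcg(page);
	__inc_lruvec_page_state(page, NR_ANON_THPS);
	unlock_page_memcg(page);

Note that NR_ANON_THPS at this point still counts whole THPs rather than
base pages, which is why both readers in mm/memcontrol.c scale the value
by HPAGE_PMD_NR before multiplying by PAGE_SIZE.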