From patchwork Fri May 8 18:30:53 2020
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11537383
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton
Cc: Alex Shi, Joonsoo Kim, Shakeel Butt, Hugh Dickins, Michal Hocko,
 "Kirill A. Shutemov", Roman Gushchin, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 06/19] mm: memcontrol: prepare uncharging for removal of private page type counters
Date: Fri, 8 May 2020 14:30:53 -0400
Message-Id: <20200508183105.225460-7-hannes@cmpxchg.org>
In-Reply-To: <20200508183105.225460-1-hannes@cmpxchg.org>
References: <20200508183105.225460-1-hannes@cmpxchg.org>

The uncharge batching code adds up the anon, file, and kmem counts to
determine the total number of pages to uncharge and references to drop.
But the next patches will remove the anon and file counters.

Maintain an aggregate nr_pages in the uncharge_gather struct instead.
Signed-off-by: Johannes Weiner
Reviewed-by: Alex Shi
Reviewed-by: Joonsoo Kim
---
 mm/memcontrol.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1d45a09b334f..a5efdad77be4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6607,6 +6607,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
 
 struct uncharge_gather {
 	struct mem_cgroup *memcg;
+	unsigned long nr_pages;
 	unsigned long pgpgout;
 	unsigned long nr_anon;
 	unsigned long nr_file;
@@ -6623,13 +6624,12 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
 
 static void uncharge_batch(const struct uncharge_gather *ug)
 {
-	unsigned long nr_pages = ug->nr_anon + ug->nr_file + ug->nr_kmem;
 	unsigned long flags;
 
 	if (!mem_cgroup_is_root(ug->memcg)) {
-		page_counter_uncharge(&ug->memcg->memory, nr_pages);
+		page_counter_uncharge(&ug->memcg->memory, ug->nr_pages);
 		if (do_memsw_account())
-			page_counter_uncharge(&ug->memcg->memsw, nr_pages);
+			page_counter_uncharge(&ug->memcg->memsw, ug->nr_pages);
 		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
 			page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
 		memcg_oom_recover(ug->memcg);
@@ -6641,16 +6641,18 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 	__mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge);
 	__mod_memcg_state(ug->memcg, NR_SHMEM, -ug->nr_shmem);
 	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
-	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, nr_pages);
+	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
 	memcg_check_events(ug->memcg, ug->dummy_page);
 	local_irq_restore(flags);
 
 	if (!mem_cgroup_is_root(ug->memcg))
-		css_put_many(&ug->memcg->css, nr_pages);
+		css_put_many(&ug->memcg->css, ug->nr_pages);
 }
 
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
+	unsigned long nr_pages;
+
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
 	if (!page->mem_cgroup)
@@ -6670,13 +6672,12 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		ug->memcg = page->mem_cgroup;
 	}
 
-	if (!PageKmemcg(page)) {
-		unsigned int nr_pages = 1;
+	nr_pages = compound_nr(page);
+	ug->nr_pages += nr_pages;
 
-		if (PageTransHuge(page)) {
-			nr_pages = compound_nr(page);
+	if (!PageKmemcg(page)) {
+		if (PageTransHuge(page))
 			ug->nr_huge += nr_pages;
-		}
 		if (PageAnon(page))
 			ug->nr_anon += nr_pages;
 		else {
@@ -6686,7 +6687,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		}
 		ug->pgpgout++;
 	} else {
-		ug->nr_kmem += compound_nr(page);
+		ug->nr_kmem += nr_pages;
 		__ClearPageKmemcg(page);
 	}