From patchwork Mon Apr 20 22:11:14 2020
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11499945
From: Johannes Weiner
To: Joonsoo Kim, Alex Shi
Cc: Shakeel Butt, Hugh Dickins, Michal Hocko, "Kirill A. Shutemov",
    Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 06/18] mm: memcontrol: prepare uncharging for removal of
 private page type counters
Date: Mon, 20 Apr 2020 18:11:14 -0400
Message-Id: <20200420221126.341272-7-hannes@cmpxchg.org>
In-Reply-To: <20200420221126.341272-1-hannes@cmpxchg.org>
References: <20200420221126.341272-1-hannes@cmpxchg.org>

The uncharge batching code adds up the anon, file, and kmem counts to
determine the total number of pages to uncharge and references to drop.
But the next patches will remove the anon and file counters. Maintain
an aggregate nr_pages in the uncharge_gather struct.
Signed-off-by: Johannes Weiner
Reviewed-by: Alex Shi
Reviewed-by: Joonsoo Kim
---
 mm/memcontrol.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b38c0a672d26..e3e8913a5b28 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6606,6 +6606,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
 
 struct uncharge_gather {
 	struct mem_cgroup *memcg;
+	unsigned long nr_pages;
 	unsigned long pgpgout;
 	unsigned long nr_anon;
 	unsigned long nr_file;
@@ -6622,13 +6623,12 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
 
 static void uncharge_batch(const struct uncharge_gather *ug)
 {
-	unsigned long nr_pages = ug->nr_anon + ug->nr_file + ug->nr_kmem;
 	unsigned long flags;
 
 	if (!mem_cgroup_is_root(ug->memcg)) {
-		page_counter_uncharge(&ug->memcg->memory, nr_pages);
+		page_counter_uncharge(&ug->memcg->memory, ug->nr_pages);
 		if (do_memsw_account())
-			page_counter_uncharge(&ug->memcg->memsw, nr_pages);
+			page_counter_uncharge(&ug->memcg->memsw, ug->nr_pages);
 		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
 			page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
 		memcg_oom_recover(ug->memcg);
@@ -6640,16 +6640,18 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 	__mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge);
 	__mod_memcg_state(ug->memcg, NR_SHMEM, -ug->nr_shmem);
 	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
-	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, nr_pages);
+	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
 	memcg_check_events(ug->memcg, ug->dummy_page);
 	local_irq_restore(flags);
 
 	if (!mem_cgroup_is_root(ug->memcg))
-		css_put_many(&ug->memcg->css, nr_pages);
+		css_put_many(&ug->memcg->css, ug->nr_pages);
 }
 
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
+	unsigned long nr_pages;
+
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 	VM_BUG_ON_PAGE(page_count(page) && !is_zone_device_page(page) &&
 		       !PageHWPoison(page) , page);
@@ -6671,13 +6673,12 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		ug->memcg = page->mem_cgroup;
 	}
 
-	if (!PageKmemcg(page)) {
-		unsigned int nr_pages = 1;
+	nr_pages = compound_nr(page);
+	ug->nr_pages += nr_pages;
 
-		if (PageTransHuge(page)) {
-			nr_pages = compound_nr(page);
+	if (!PageKmemcg(page)) {
+		if (PageTransHuge(page))
 			ug->nr_huge += nr_pages;
-		}
 		if (PageAnon(page))
 			ug->nr_anon += nr_pages;
 		else {
@@ -6687,7 +6688,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		}
 		ug->pgpgout++;
 	} else {
-		ug->nr_kmem += compound_nr(page);
+		ug->nr_kmem += nr_pages;
 		__ClearPageKmemcg(page);
 	}