From patchwork Wed May 27 18:29:47 2020
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11573757
Date: Wed, 27 May 2020 11:29:47 -0700
Message-Id: <20200527182947.251343-1-shakeelb@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.27.0.rc0.183.gde8f92d652-goog
Subject: [PATCH resend 2/3] mm: swap: memcg: fix memcg stats for huge pages
From: Shakeel Butt
To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
Cc: Andrew Morton, Minchan Kim, Rik van Riel, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Shakeel Butt

Commit 2262185c5b28 ("mm: per-cgroup memory reclaim stats") added
PGLAZYFREE, PGACTIVATE and PGDEACTIVATE stats for cgroups, but missed a
couple of places, and PGLAZYFREE missed huge page handling. Fix that.
Also, for PGLAZYFREE, use the irq-unsafe function to update the stat,
since irqs are already disabled at that point.
Fixes: 2262185c5b28 ("mm: per-cgroup memory reclaim stats")
Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
---
 mm/swap.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 3dbef6517cac..4eb179ee0b72 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -278,6 +278,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
+		int nr_pages = hpage_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec, lru);
 		SetPageActive(page);
@@ -285,7 +286,8 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 		trace_mm_lru_activate(page);
 
-		__count_vm_events(PGACTIVATE, hpage_nr_pages(page));
+		__count_vm_events(PGACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
 		update_page_reclaim_stat(lruvec, file, 1);
 	}
 }
@@ -540,8 +542,10 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGROTATED, nr_pages);
 	}
 
-	if (active)
+	if (active) {
 		__count_vm_events(PGDEACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
+	}
 	update_page_reclaim_stat(lruvec, file, 0);
 }
 
@@ -551,13 +555,15 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
+		int nr_pages = hpage_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		add_page_to_lru_list(page, lruvec, lru);
 
-		__count_vm_events(PGDEACTIVATE, hpage_nr_pages(page));
+		__count_vm_events(PGDEACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
 		update_page_reclaim_stat(lruvec, file, 0);
 	}
 }
@@ -568,6 +574,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
+		int nr_pages = hpage_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec,
 				       LRU_INACTIVE_ANON + active);
@@ -581,8 +588,8 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		ClearPageSwapBacked(page);
 		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
 
-		__count_vm_events(PGLAZYFREE, hpage_nr_pages(page));
-		count_memcg_page_event(page, PGLAZYFREE);
+		__count_vm_events(PGLAZYFREE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, nr_pages);
 		update_page_reclaim_stat(lruvec, 1, 0);
 	}
 }
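
For reference, the kind of undercount the PGLAZYFREE hunk fixes can be
shown with a trivial userspace model. This is only an illustrative
sketch, not kernel code: struct page, HPAGE_NR, nr_pages() and the two
counters below are simplified stand-ins for the real vmstat and memcg
counters, and HPAGE_NR assumes a 2MB THP built from 4K base pages.

/*
 * Illustrative sketch only (not kernel code): struct page, HPAGE_NR and
 * the counters below are simplified stand-ins for the real vmstat and
 * memcg counters.
 */
#include <stdio.h>

#define HPAGE_NR 512	/* assumes 2MB THP with 4K base pages */

struct page { int is_huge; };

/* Stand-in for hpage_nr_pages(): 1 for a base page, HPAGE_NR for a THP. */
static long nr_pages(const struct page *p)
{
	return p->is_huge ? HPAGE_NR : 1;
}

int main(void)
{
	struct page thp = { .is_huge = 1 };
	long vm_pglazyfree = 0;		/* global vmstat: already THP-aware */
	long memcg_pglazyfree = 0;	/* memcg stat */

	/* Before the patch: the global counter got hpage_nr_pages(page),
	 * while count_memcg_page_event() bumped the memcg counter by one. */
	vm_pglazyfree += nr_pages(&thp);
	memcg_pglazyfree += 1;
	printf("before: vm=%ld memcg=%ld (memcg short by %ld pages)\n",
	       vm_pglazyfree, memcg_pglazyfree,
	       vm_pglazyfree - memcg_pglazyfree);

	/* After the patch: both counters receive nr_pages, mirroring
	 * __count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, nr_pages). */
	vm_pglazyfree = memcg_pglazyfree = 0;
	vm_pglazyfree += nr_pages(&thp);
	memcg_pglazyfree += nr_pages(&thp);
	printf("after:  vm=%ld memcg=%ld (consistent)\n",
	       vm_pglazyfree, memcg_pglazyfree);

	return 0;
}

The same per-huge-page counting applies to the PGACTIVATE and
PGDEACTIVATE hunks above, which previously did not update the memcg
counters at all in these paths.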