From patchwork Fri Sep 18 03:00:51 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Zhao <yuzhao@google.com>
X-Patchwork-Id: 11783941
Date: Thu, 17 Sep 2020 21:00:51 -0600
In-Reply-To: <20200918030051.650890-1-yuzhao@google.com>
Message-Id: <20200918030051.650890-14-yuzhao@google.com>
Mime-Version: 1.0
References: <20200918030051.650890-1-yuzhao@google.com>
X-Mailer: git-send-email 2.28.0.681.g6f77f65b4e-goog
Subject: [PATCH 13/13] mm: enlarge the int parameter of update_lru_size()
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Michal Hocko
Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
 Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
 Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
 Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
 cgroups@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Yu Zhao

In update_lru_sizes(), we call update_lru_size() with a long argument,
whereas the callee only takes an int parameter. Though this doesn't
cause any overflow I'm aware of, it's not a good idea to go through the
truncation, since the underlying counters are already of type long.

This patch enlarges all relevant parameters on the path to the final
underlying counters:

  update_lru_size(int -> long)
    if memcg:
      __mod_lruvec_state(int -> long)
        if smp:
          __mod_node_page_state(long)
        else:
          __mod_node_page_state(int -> long)
        __mod_memcg_lruvec_state(int -> long)
          __mod_memcg_state(int -> long)
    else:
      __mod_lruvec_state(int -> long)
        if smp:
          __mod_node_page_state(long)
        else:
          __mod_node_page_state(int -> long)

    __mod_zone_page_state(long)

    if memcg:
      mem_cgroup_update_lru_size(int -> long)

Note that __mod_node_page_state() in the SMP case and
__mod_zone_page_state() already use long, so this change also fixes
that inconsistency.
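To illustrate the truncation being avoided, here is a minimal
user-space sketch (hypothetical code, not from the kernel): on an LP64
system, a long delta passed through an int parameter silently loses
its high bits.

#include <stdio.h>

/* old shape: the long argument is implicitly converted to int */
static void update_size_int(int nr_pages)
{
	printf("int path sees:  %d\n", nr_pages);
}

/* new shape: the full long value reaches the counter update */
static void update_size_long(long nr_pages)
{
	printf("long path sees: %ld\n", nr_pages);
}

int main(void)
{
	long nr_pages = 5L << 30;	/* 5368709120, exceeds INT_MAX */

	update_size_int(nr_pages);	/* typically prints 1073741824 */
	update_size_long(nr_pages);	/* prints 5368709120 */
	return 0;
}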
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/memcontrol.h | 14 +++++++-------
 include/linux/mm_inline.h  |  2 +-
 include/linux/vmstat.h     |  2 +-
 mm/memcontrol.c            | 10 +++++-----
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d0b036123c6a..fcd1829f8382 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -621,7 +621,7 @@ static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 
 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
-				int zid, int nr_pages);
+				int zid, long nr_pages);
 
 static inline
 unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
@@ -707,7 +707,7 @@ static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg,
 	return x;
 }
 
-void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
+void __mod_memcg_state(struct mem_cgroup *memcg, int idx, long val);
 
 /* idx can be of type enum memcg_stat_item or node_stat_item */
 static inline void mod_memcg_state(struct mem_cgroup *memcg,
@@ -790,9 +790,9 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			      int val);
+			      long val);
 void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			int val);
+			long val);
 void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val);
 
 void mod_memcg_obj_state(void *p, int idx, int val);
@@ -1166,7 +1166,7 @@ static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg,
 
 static inline void __mod_memcg_state(struct mem_cgroup *memcg,
 				     int idx,
-				     int nr)
+				     long nr)
 {
 }
 
@@ -1201,12 +1201,12 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
-					    enum node_stat_item idx, int val)
+					    enum node_stat_item idx, long val)
 {
 }
 
 static inline void __mod_lruvec_state(struct lruvec *lruvec,
-				      enum node_stat_item idx, int val)
+				      enum node_stat_item idx, long val)
 {
 	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
 }
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 355ea1ee32bd..18e85071b44a 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -26,7 +26,7 @@ static inline int page_is_file_lru(struct page *page)
 
 static __always_inline void update_lru_size(struct lruvec *lruvec,
 				enum lru_list lru, enum zone_type zid,
-				int nr_pages)
+				long nr_pages)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 91220ace31da..2ae35e8c45f0 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -310,7 +310,7 @@ static inline void __mod_zone_page_state(struct zone *zone,
 }
 
 static inline void __mod_node_page_state(struct pglist_data *pgdat,
-			enum node_stat_item item, int delta)
+			enum node_stat_item item, long delta)
 {
 	node_page_state_add(delta, pgdat, item);
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cfa6cbad21d5..11bc4bb36882 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -774,7 +774,7 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
  * @idx: the stat item - can be enum memcg_stat_item or enum node_stat_item
  * @val: delta to add to the counter, can be negative
  */
-void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
+void __mod_memcg_state(struct mem_cgroup *memcg, int idx, long val)
 {
 	long x, threshold = MEMCG_CHARGE_BATCH;
 
@@ -812,7 +812,7 @@ parent_nodeinfo(struct mem_cgroup_per_node *pn, int nid)
 }
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			      int val)
+			      long val)
 {
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup *memcg;
@@ -853,7 +853,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 * change of state at this level: per-node, per-cgroup, per-lruvec.
 */
 void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			int val)
+			long val)
 {
 	/* Update node */
 	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
@@ -1354,7 +1354,7 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 * so as to allow it to check that lru_size 0 is consistent with list_empty).
 */
 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
-				int zid, int nr_pages)
+				int zid, long nr_pages)
 {
 	struct mem_cgroup_per_node *mz;
 	unsigned long *lru_size;
@@ -1371,7 +1371,7 @@ void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
 
 	size = *lru_size;
 	if (WARN_ONCE(size < 0,
-		"%s(%p, %d, %d): lru_size %ld\n",
+		"%s(%p, %d, %ld): lru_size %ld\n",
 		__func__, lruvec, lru, nr_pages, size)) {
 		VM_BUG_ON(1);
 		*lru_size = 0;
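
A side note on the final hunk: widening nr_pages also requires the
WARN_ONCE() format to change from "%d" to "%ld", since printing a long
argument through an int conversion specifier is undefined behavior.
A minimal user-space sketch (hypothetical, using plain printf() in
place of the kernel's WARN_ONCE()):

#include <stdio.h>

int main(void)
{
	long nr_pages = 5368709120L;

	/* matching specifier: prints 5368709120 */
	printf("nr_pages: %ld\n", nr_pages);

	/*
	 * printf("nr_pages: %d\n", nr_pages) would be undefined
	 * behavior; gcc and clang flag it with -Wformat.
	 */
	return 0;
}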