From patchwork Sat Apr 27 00:37:28 2024
X-Patchwork-Submitter: Shakeel Butt <shakeel.butt@linux.dev>
X-Patchwork-Id: 13645513
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/7] memcg: dynamically allocate lruvec_stats
Date: Fri, 26 Apr 2024 17:37:28 -0700
Message-ID: <20240427003733.3898961-3-shakeel.butt@linux.dev>
In-Reply-To: <20240427003733.3898961-1-shakeel.butt@linux.dev>
References: <20240427003733.3898961-1-shakeel.butt@linux.dev>
MIME-Version: 1.0

To decouple the dependency of lruvec_stats on NR_VM_NODE_STAT_ITEMS, we need
to dynamically allocate lruvec_stats in the mem_cgroup_per_node structure.
Also move the definitions of lruvec_stats_percpu and lruvec_stats, along with
their related functions, into memcontrol.c to facilitate later patches. No
functional changes in this patch.
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Roman Gushchin
---
 include/linux/memcontrol.h | 62 +++------------------------
 mm/memcontrol.c            | 87 ++++++++++++++++++++++++++++++++------
 2 files changed, 81 insertions(+), 68 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 9aba0d0462ca..ab8a6e884375 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -83,6 +83,8 @@ enum mem_cgroup_events_target {
 
 struct memcg_vmstats_percpu;
 struct memcg_vmstats;
+struct lruvec_stats_percpu;
+struct lruvec_stats;
 
 struct mem_cgroup_reclaim_iter {
 	struct mem_cgroup *position;
@@ -90,25 +92,6 @@ struct mem_cgroup_reclaim_iter {
 	unsigned int generation;
 };
 
-struct lruvec_stats_percpu {
-	/* Local (CPU and cgroup) state */
-	long state[NR_VM_NODE_STAT_ITEMS];
-
-	/* Delta calculation for lockless upward propagation */
-	long state_prev[NR_VM_NODE_STAT_ITEMS];
-};
-
-struct lruvec_stats {
-	/* Aggregated (CPU and subtree) state */
-	long state[NR_VM_NODE_STAT_ITEMS];
-
-	/* Non-hierarchical (CPU aggregated) state */
-	long state_local[NR_VM_NODE_STAT_ITEMS];
-
-	/* Pending child counts during tree propagation */
-	long state_pending[NR_VM_NODE_STAT_ITEMS];
-};
-
 /*
  * per-node information in memory controller.
  */
@@ -116,7 +99,7 @@ struct mem_cgroup_per_node {
 	struct lruvec		lruvec;
 
 	struct lruvec_stats_percpu __percpu	*lruvec_stats_percpu;
-	struct lruvec_stats			lruvec_stats;
+	struct lruvec_stats			*lruvec_stats;
 
 	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
 
@@ -1037,42 +1020,9 @@ static inline void mod_memcg_page_state(struct page *page,
 }
 
 unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx);
-
-static inline unsigned long lruvec_page_state(struct lruvec *lruvec,
-					      enum node_stat_item idx)
-{
-	struct mem_cgroup_per_node *pn;
-	long x;
-
-	if (mem_cgroup_disabled())
-		return node_page_state(lruvec_pgdat(lruvec), idx);
-
-	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
-	x = READ_ONCE(pn->lruvec_stats.state[idx]);
-#ifdef CONFIG_SMP
-	if (x < 0)
-		x = 0;
-#endif
-	return x;
-}
-
-static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
-						    enum node_stat_item idx)
-{
-	struct mem_cgroup_per_node *pn;
-	long x = 0;
-
-	if (mem_cgroup_disabled())
-		return node_page_state(lruvec_pgdat(lruvec), idx);
-
-	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
-	x = READ_ONCE(pn->lruvec_stats.state_local[idx]);
-#ifdef CONFIG_SMP
-	if (x < 0)
-		x = 0;
-#endif
-	return x;
-}
+unsigned long lruvec_page_state(struct lruvec *lruvec, enum node_stat_item idx);
+unsigned long lruvec_page_state_local(struct lruvec *lruvec,
+				      enum node_stat_item idx);
 
 void mem_cgroup_flush_stats(struct mem_cgroup *memcg);
 void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 53769d06053f..5e337ed6c6bf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -576,6 +576,60 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
 	return mz;
 }
 
+struct lruvec_stats_percpu {
+	/* Local (CPU and cgroup) state */
+	long state[NR_VM_NODE_STAT_ITEMS];
+
+	/* Delta calculation for lockless upward propagation */
+	long state_prev[NR_VM_NODE_STAT_ITEMS];
+};
+
+struct lruvec_stats {
+	/* Aggregated (CPU and subtree) state */
+	long state[NR_VM_NODE_STAT_ITEMS];
+
+	/* Non-hierarchical (CPU aggregated) state */
+	long state_local[NR_VM_NODE_STAT_ITEMS];
+
+	/* Pending child counts during tree propagation */
+	long state_pending[NR_VM_NODE_STAT_ITEMS];
+};
+
+unsigned long lruvec_page_state(struct lruvec *lruvec, enum node_stat_item idx)
+{
+	struct mem_cgroup_per_node *pn;
+	long x;
+
+	if (mem_cgroup_disabled())
+		return node_page_state(lruvec_pgdat(lruvec), idx);
+
+	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+	x = READ_ONCE(pn->lruvec_stats->state[idx]);
+#ifdef CONFIG_SMP
+	if (x < 0)
+		x = 0;
+#endif
+	return x;
+}
+
+unsigned long lruvec_page_state_local(struct lruvec *lruvec,
+				      enum node_stat_item idx)
+{
+	struct mem_cgroup_per_node *pn;
+	long x = 0;
+
+	if (mem_cgroup_disabled())
+		return node_page_state(lruvec_pgdat(lruvec), idx);
+
+	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+	x = READ_ONCE(pn->lruvec_stats->state_local[idx]);
+#ifdef CONFIG_SMP
+	if (x < 0)
+		x = 0;
+#endif
+	return x;
+}
+
 /* Subset of vm_event_item to report for memcg event stats */
 static const unsigned int memcg_vm_event_stat[] = {
 	PGPGIN,
@@ -5492,18 +5546,25 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 	if (!pn)
 		return 1;
 
+	pn->lruvec_stats = kzalloc_node(sizeof(struct lruvec_stats), GFP_KERNEL,
+					node);
+	if (!pn->lruvec_stats)
+		goto fail;
+
 	pn->lruvec_stats_percpu = alloc_percpu_gfp(struct lruvec_stats_percpu,
 						   GFP_KERNEL_ACCOUNT);
-	if (!pn->lruvec_stats_percpu) {
-		kfree(pn);
-		return 1;
-	}
+	if (!pn->lruvec_stats_percpu)
+		goto fail;
 
 	lruvec_init(&pn->lruvec);
 	pn->memcg = memcg;
 
 	memcg->nodeinfo[node] = pn;
 	return 0;
+fail:
+	kfree(pn->lruvec_stats);
+	kfree(pn);
+	return 1;
 }
 
 static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
@@ -5514,6 +5575,7 @@ static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 		return;
 
 	free_percpu(pn->lruvec_stats_percpu);
+	kfree(pn->lruvec_stats);
 	kfree(pn);
 }
 
@@ -5866,18 +5928,19 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 
 	for_each_node_state(nid, N_MEMORY) {
 		struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
-		struct mem_cgroup_per_node *ppn = NULL;
+		struct lruvec_stats *lstats = pn->lruvec_stats;
+		struct lruvec_stats *plstats = NULL;
 		struct lruvec_stats_percpu *lstatc;
 
 		if (parent)
-			ppn = parent->nodeinfo[nid];
+			plstats = parent->nodeinfo[nid]->lruvec_stats;
 
 		lstatc = per_cpu_ptr(pn->lruvec_stats_percpu, cpu);
 
 		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
-			delta = pn->lruvec_stats.state_pending[i];
+			delta = lstats->state_pending[i];
 			if (delta)
-				pn->lruvec_stats.state_pending[i] = 0;
+				lstats->state_pending[i] = 0;
 
 			delta_cpu = 0;
 			v = READ_ONCE(lstatc->state[i]);
@@ -5888,12 +5951,12 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 			}
 
 			if (delta_cpu)
-				lstats->state_local[i] += delta_cpu;
+				lstats->state_local[i] += delta_cpu;
 
 			if (delta) {
-				pn->lruvec_stats.state[i] += delta;
-				if (ppn)
-					ppn->lruvec_stats.state_pending[i] += delta;
+				lstats->state[i] += delta;
+				if (plstats)
+					plstats->state_pending[i] += delta;
 			}
 		}
 	}
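For readers following along, here is a minimal userspace C sketch (not kernel
code, and not part of the patch) of the allocation pattern the patch
introduces: the stats block hangs off the per-node structure as a separately
allocated pointer instead of an embedded struct, and allocation failures
unwind through a single "fail" label. All names below (fake_per_node,
fake_lruvec_stats, NSTATS) are made up for illustration; the real code uses
kzalloc_node()/kfree() on struct mem_cgroup_per_node as shown in the diff
above.

#include <stdio.h>
#include <stdlib.h>

#define NSTATS 64			/* stand-in for NR_VM_NODE_STAT_ITEMS */

struct fake_lruvec_stats {
	long state[NSTATS];
	long state_local[NSTATS];
	long state_pending[NSTATS];
};

struct fake_per_node {
	/* now a pointer, allocated separately from the node info itself */
	struct fake_lruvec_stats *lruvec_stats;
	/* ... other per-node fields would live here ... */
};

static struct fake_per_node *alloc_per_node_info(void)
{
	struct fake_per_node *pn = calloc(1, sizeof(*pn));

	if (!pn)
		return NULL;

	pn->lruvec_stats = calloc(1, sizeof(*pn->lruvec_stats));
	if (!pn->lruvec_stats)
		goto fail;		/* single unwind path, as in the patch */

	return pn;
fail:
	free(pn->lruvec_stats);		/* free(NULL) is a no-op */
	free(pn);
	return NULL;
}

static void free_per_node_info(struct fake_per_node *pn)
{
	if (!pn)
		return;
	free(pn->lruvec_stats);		/* free the stats block before the node info */
	free(pn);
}

int main(void)
{
	struct fake_per_node *pn = alloc_per_node_info();

	if (!pn)
		return 1;
	pn->lruvec_stats->state[0] = 42;
	printf("state[0] = %ld\n", pn->lruvec_stats->state[0]);
	free_per_node_info(pn);
	return 0;
}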