From patchwork Sat Oct 26 09:34:07 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xiu Jianfeng
X-Patchwork-Id: 13852162
From: Xiu Jianfeng
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
    shakeel.butt@linux.dev, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    wangweiyang2@huawei.com
Subject: [PATCH -next] memcg: factor out mem_cgroup_stat_aggregate()
Date: Sat, 26 Oct 2024 09:34:07 +0000
Message-Id: <20241026093407.310955-1-xiujianfeng@huaweicloud.com>
X-Mailer: git-send-email 2.34.1
From: Xiu Jianfeng

Currently mem_cgroup_css_rstat_flush() is used to flush the per-CPU
statistics from a specified CPU into the global statistics of the
memcg. It processes three kinds of data in three for loops using
exactly the same method. Therefore, the for loop can be factored out,
which makes the code cleaner.
Signed-off-by: Xiu Jianfeng
---
 mm/memcontrol.c | 129 ++++++++++++++++++++++++++----------------------
 1 file changed, 70 insertions(+), 59 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 17af08367c68..c3ae13c8f6fa 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3738,68 +3738,90 @@ static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
 	memcg_wb_domain_size_changed(memcg);
 }
 
-static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
+struct aggregate_control {
+	/* pointer to the aggregated (CPU and subtree aggregated) counters */
+	long *aggregate;
+	/* pointer to the non-hierarchical (CPU aggregated) counters */
+	long *local;
+	/* pointer to the pending child counters during tree propagation */
+	long *pending;
+	/* pointer to the parent's pending counters, could be NULL */
+	long *ppending;
+	/* pointer to the percpu counters to be aggregated */
+	long *cstat;
+	/* pointer to the percpu counters of the last aggregation */
+	long *cstat_prev;
+	/* size of the above counters */
+	int size;
+};
+
+static void mem_cgroup_stat_aggregate(struct aggregate_control *ac)
 {
-	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
-	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
-	struct memcg_vmstats_percpu *statc;
+	int i;
 	long delta, delta_cpu, v;
-	int i, nid;
-
-	statc = per_cpu_ptr(memcg->vmstats_percpu, cpu);
 
-	for (i = 0; i < MEMCG_VMSTAT_SIZE; i++) {
+	for (i = 0; i < ac->size; i++) {
 		/*
 		 * Collect the aggregated propagation counts of groups
 		 * below us. We're in a per-cpu loop here and this is
 		 * a global counter, so the first cycle will get them.
 		 */
-		delta = memcg->vmstats->state_pending[i];
+		delta = ac->pending[i];
 		if (delta)
-			memcg->vmstats->state_pending[i] = 0;
+			ac->pending[i] = 0;
 
 		/* Add CPU changes on this level since the last flush */
 		delta_cpu = 0;
-		v = READ_ONCE(statc->state[i]);
-		if (v != statc->state_prev[i]) {
-			delta_cpu = v - statc->state_prev[i];
+		v = READ_ONCE(ac->cstat[i]);
+		if (v != ac->cstat_prev[i]) {
+			delta_cpu = v - ac->cstat_prev[i];
 			delta += delta_cpu;
-			statc->state_prev[i] = v;
+			ac->cstat_prev[i] = v;
 		}
 
 		/* Aggregate counts on this level and propagate upwards */
 		if (delta_cpu)
-			memcg->vmstats->state_local[i] += delta_cpu;
+			ac->local[i] += delta_cpu;
 
 		if (delta) {
-			memcg->vmstats->state[i] += delta;
-			if (parent)
-				parent->vmstats->state_pending[i] += delta;
+			ac->aggregate[i] += delta;
+			if (ac->ppending)
+				ac->ppending[i] += delta;
 		}
 	}
+}
 
-	for (i = 0; i < NR_MEMCG_EVENTS; i++) {
-		delta = memcg->vmstats->events_pending[i];
-		if (delta)
-			memcg->vmstats->events_pending[i] = 0;
-
-		delta_cpu = 0;
-		v = READ_ONCE(statc->events[i]);
-		if (v != statc->events_prev[i]) {
-			delta_cpu = v - statc->events_prev[i];
-			delta += delta_cpu;
-			statc->events_prev[i] = v;
-		}
+static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
+	struct memcg_vmstats_percpu *statc;
+	struct aggregate_control ac;
+	int nid;
 
-		if (delta_cpu)
-			memcg->vmstats->events_local[i] += delta_cpu;
+	statc = per_cpu_ptr(memcg->vmstats_percpu, cpu);
 
-		if (delta) {
-			memcg->vmstats->events[i] += delta;
-			if (parent)
-				parent->vmstats->events_pending[i] += delta;
-		}
-	}
+	ac = (struct aggregate_control) {
+		.aggregate = memcg->vmstats->state,
+		.local = memcg->vmstats->state_local,
+		.pending = memcg->vmstats->state_pending,
+		.ppending = parent ? parent->vmstats->state_pending : NULL,
+		.cstat = statc->state,
+		.cstat_prev = statc->state_prev,
+		.size = MEMCG_VMSTAT_SIZE,
+	};
+	mem_cgroup_stat_aggregate(&ac);
+
+	ac = (struct aggregate_control) {
+		.aggregate = memcg->vmstats->events,
+		.local = memcg->vmstats->events_local,
+		.pending = memcg->vmstats->events_pending,
+		.ppending = parent ? parent->vmstats->events_pending : NULL,
+		.cstat = statc->events,
+		.cstat_prev = statc->events_prev,
+		.size = NR_MEMCG_EVENTS,
+	};
+	mem_cgroup_stat_aggregate(&ac);
 
 	for_each_node_state(nid, N_MEMORY) {
 		struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
@@ -3812,28 +3834,17 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 
 		lstatc = per_cpu_ptr(pn->lruvec_stats_percpu, cpu);
 
-		for (i = 0; i < NR_MEMCG_NODE_STAT_ITEMS; i++) {
-			delta = lstats->state_pending[i];
-			if (delta)
-				lstats->state_pending[i] = 0;
-
-			delta_cpu = 0;
-			v = READ_ONCE(lstatc->state[i]);
-			if (v != lstatc->state_prev[i]) {
-				delta_cpu = v - lstatc->state_prev[i];
-				delta += delta_cpu;
-				lstatc->state_prev[i] = v;
-			}
-
-			if (delta_cpu)
-				lstats->state_local[i] += delta_cpu;
+		ac = (struct aggregate_control) {
+			.aggregate = lstats->state,
+			.local = lstats->state_local,
+			.pending = lstats->state_pending,
+			.ppending = plstats ? plstats->state_pending : NULL,
+			.cstat = lstatc->state,
+			.cstat_prev = lstatc->state_prev,
+			.size = NR_MEMCG_NODE_STAT_ITEMS,
+		};
+		mem_cgroup_stat_aggregate(&ac);
 
-			if (delta) {
-				lstats->state[i] += delta;
-				if (plstats)
-					plstats->state_pending[i] += delta;
-			}
-		}
 	}
 	WRITE_ONCE(statc->stats_updates, 0);
 	/* We are in a per-cpu loop here, only do the atomic write once */