From patchwork Mon Mar 13 16:25:14 2023
X-Patchwork-Submitter: Marcelo Tosatti <mtosatti@redhat.com>
X-Patchwork-Id: 13172855
Message-ID: <20230313162634.433875205@redhat.com>
User-Agent: quilt/0.67
Date: Mon, 13 Mar 2023 13:25:14 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Russell King,
 Huacai Chen, Heiko Carstens, x86@kernel.org, Vlastimil Babka,
 Marcelo Tosatti
Subject: [PATCH v5 07/12] mm/vmstat: switch counter modification to cmpxchg
References: <20230313162507.032200398@redhat.com>
MIME-Version: 1.0

In preparation for switching the vmstat shepherd to flush per-CPU counters
remotely, switch the __{mod,inc,dec} functions that modify the counters
to use cmpxchg.
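The change replaces interrupt disabling with a lockless read-modify-write
loop. As a rough, minimal sketch of that pattern (not kernel code: C11
atomics stand in for this_cpu_read()/this_cpu_cmpxchg(), and percpu_diff,
global_count and update_counter() are names invented for this
illustration), the retry loop looks like this:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for one CPU's per-cpu diff and the shared global counter. */
static _Atomic long percpu_diff;	/* role of pcp->vm_stat_diff[item] */
static long global_count;		/* role of the global zone/node counter */
static const long stat_threshold = 32;	/* role of pcp->stat_threshold */

/*
 * Same shape as the patch's mod_zone_state()/mod_node_state(): retry
 * with compare-and-swap instead of disabling interrupts.  overstep_mode
 * is 0, 1 or -1, as in the patch.
 */
static void update_counter(long delta, int overstep_mode)
{
	long o, n, z;

	do {
		z = 0;			/* amount to fold into global_count */
		o = atomic_load(&percpu_diff);
		n = o + delta;

		if (labs(n) > stat_threshold) {
			long os = overstep_mode * (stat_threshold >> 1);

			z = n + os;	/* fold this into the global counter */
			n = -os;	/* leave headroom in the per-cpu diff */
		}
	} while (!atomic_compare_exchange_weak(&percpu_diff, &o, n));

	if (z)
		global_count += z;	/* zone_page_state_add() stand-in */
}

int main(void)
{
	for (int i = 0; i < 100; i++)
		update_counter(1, 1);	/* like inc_zone_page_state() */

	printf("global=%ld diff=%ld\n", global_count,
	       atomic_load(&percpu_diff));
	return 0;
}

With a threshold of 32 this prints global=98 diff=2: the global counter
plus the per-cpu diff always equals the number of increments applied, and
an updater that is interrupted between the read and the cmpxchg simply
retries, which is the property the rest of the series relies on to flush
the counters remotely.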
To facilitate reviewing, the functions are ordered in the file as:

	__{mod,inc,dec}_{zone,node}_page_state
	#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
	{mod,inc,dec}_{zone,node}_page_state
	#else
	{mod,inc,dec}_{zone,node}_page_state
	#endif

This patch defines the "__" versions for the CONFIG_HAVE_CMPXCHG_LOCAL
case to be their non-"__" counterparts:

	#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
	{mod,inc,dec}_{zone,node}_page_state
	__{mod,inc,dec}_{zone,node}_page_state = {mod,inc,dec}_{zone,node}_page_state
	#else
	{mod,inc,dec}_{zone,node}_page_state
	__{mod,inc,dec}_{zone,node}_page_state
	#endif

To measure the performance difference, the page allocator microbenchmark

	https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/bench/page_bench01.c

was used with loops=1000000, on an Intel Core i7-11850H @ 2.50GHz.
The single_page_alloc_free test does:

	/** Loop to measure **/
	for (i = 0; i < rec->loops; i++) {
		my_page = alloc_page(gfp_mask);
		if (unlikely(my_page == NULL))
			return 0;
		__free_page(my_page);
	}

Unit is cycles.

	Vanilla		Patched		Diff
	115.25		117		1.4%

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -334,6 +334,188 @@ void set_pgdat_percpu_threshold(pg_data_
 	}
 }
 
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+/*
+ * If we have cmpxchg_local support then we do not need to incur the overhead
+ * that comes with local_irq_save/restore if we use this_cpu_cmpxchg.
+ *
+ * mod_state() modifies the zone counter state through atomic per cpu
+ * operations.
+ *
+ * Overstep mode specifies how overstep should be handled:
+ *       0       No overstepping
+ *       1       Overstepping half of threshold
+ *       -1      Overstepping minus half of threshold
+ */
+static inline void mod_zone_state(struct zone *zone, enum zone_stat_item item,
+				  long delta, int overstep_mode)
+{
+	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
+	s8 __percpu *p = pcp->vm_stat_diff + item;
+	long o, n, t, z;
+
+	do {
+		z = 0;  /* overflow to zone counters */
+
+		/*
+		 * The fetching of the stat_threshold is racy. We may apply
+		 * a counter threshold to the wrong cpu if we get
+		 * rescheduled while executing here. However, the next
+		 * counter update will apply the threshold again and
+		 * therefore bring the counter under the threshold again.
+		 *
+		 * Most of the time the thresholds are the same anyways
+		 * for all cpus in a zone.
+		 */
+		t = this_cpu_read(pcp->stat_threshold);
+
+		o = this_cpu_read(*p);
+		n = delta + o;
+
+		if (abs(n) > t) {
+			int os = overstep_mode * (t >> 1);
+
+			/* Overflow must be added to zone counters */
+			z = n + os;
+			n = -os;
+		}
+	} while (this_cpu_cmpxchg(*p, o, n) != o);
+
+	if (z)
+		zone_page_state_add(z, zone, item);
+}
+
+void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+			 long delta)
+{
+	mod_zone_state(zone, item, delta, 0);
+}
+EXPORT_SYMBOL(mod_zone_page_state);
+
+void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+			   long delta)
+{
+	mod_zone_state(zone, item, delta, 0);
+}
+EXPORT_SYMBOL(__mod_zone_page_state);
+
+void inc_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, 1, 1);
+}
+EXPORT_SYMBOL(inc_zone_page_state);
+
+void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, 1, 1);
+}
+EXPORT_SYMBOL(__inc_zone_page_state);
+
+void dec_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, -1, -1);
+}
+EXPORT_SYMBOL(dec_zone_page_state);
+
+void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, -1, -1);
+}
+EXPORT_SYMBOL(__dec_zone_page_state);
+
+static inline void mod_node_state(struct pglist_data *pgdat,
+				  enum node_stat_item item,
+				  int delta, int overstep_mode)
+{
+	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
+	s8 __percpu *p = pcp->vm_node_stat_diff + item;
+	long o, n, t, z;
+
+	if (vmstat_item_in_bytes(item)) {
+		/*
+		 * Only cgroups use subpage accounting right now; at
+		 * the global level, these items still change in
+		 * multiples of whole pages. Store them as pages
+		 * internally to keep the per-cpu counters compact.
+		 */
+		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
+		delta >>= PAGE_SHIFT;
+	}
+
+	do {
+		z = 0;  /* overflow to node counters */
+
+		/*
+		 * The fetching of the stat_threshold is racy. We may apply
+		 * a counter threshold to the wrong cpu if we get
+		 * rescheduled while executing here. However, the next
+		 * counter update will apply the threshold again and
+		 * therefore bring the counter under the threshold again.
+		 *
+		 * Most of the time the thresholds are the same anyways
+		 * for all cpus in a node.
+		 */
+		t = this_cpu_read(pcp->stat_threshold);
+
+		o = this_cpu_read(*p);
+		n = delta + o;
+
+		if (abs(n) > t) {
+			int os = overstep_mode * (t >> 1);
+
+			/* Overflow must be added to node counters */
+			z = n + os;
+			n = -os;
+		}
+	} while (this_cpu_cmpxchg(*p, o, n) != o);
+
+	if (z)
+		node_page_state_add(z, pgdat, item);
+}
+
+void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+			 long delta)
+{
+	mod_node_state(pgdat, item, delta, 0);
+}
+EXPORT_SYMBOL(mod_node_page_state);
+
+void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+			   long delta)
+{
+	mod_node_state(pgdat, item, delta, 0);
+}
+EXPORT_SYMBOL(__mod_node_page_state);
+
+void inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+{
+	mod_node_state(pgdat, item, 1, 1);
+}
+
+void inc_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, 1, 1);
+}
+EXPORT_SYMBOL(inc_node_page_state);
+
+void __inc_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, 1, 1);
+}
+EXPORT_SYMBOL(__inc_node_page_state);
+
+void dec_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, -1, -1);
+}
+EXPORT_SYMBOL(dec_node_page_state);
+
+void __dec_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, -1, -1);
+}
+EXPORT_SYMBOL(__dec_node_page_state);
+#else
 /*
  * For use when we know that interrupts are disabled,
  * or when we know that preemption is disabled and that
@@ -541,149 +723,6 @@ void __dec_node_page_state(struct page *
 }
 EXPORT_SYMBOL(__dec_node_page_state);
 
-#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
-/*
- * If we have cmpxchg_local support then we do not need to incur the overhead
- * that comes with local_irq_save/restore if we use this_cpu_cmpxchg.
- *
- * mod_state() modifies the zone counter state through atomic per cpu
- * operations.
- *
- * Overstep mode specifies how overstep should handled:
- *       0       No overstepping
- *       1       Overstepping half of threshold
- *       -1      Overstepping minus half of threshold
-*/
-static inline void mod_zone_state(struct zone *zone,
-	enum zone_stat_item item, long delta, int overstep_mode)
-{
-	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
-	s8 __percpu *p = pcp->vm_stat_diff + item;
-	long o, n, t, z;
-
-	do {
-		z = 0;  /* overflow to zone counters */
-
-		/*
-		 * The fetching of the stat_threshold is racy. We may apply
-		 * a counter threshold to the wrong the cpu if we get
-		 * rescheduled while executing here. However, the next
-		 * counter update will apply the threshold again and
-		 * therefore bring the counter under the threshold again.
-		 *
-		 * Most of the time the thresholds are the same anyways
-		 * for all cpus in a zone.
-		 */
-		t = this_cpu_read(pcp->stat_threshold);
-
-		o = this_cpu_read(*p);
-		n = delta + o;
-
-		if (abs(n) > t) {
-			int os = overstep_mode * (t >> 1) ;
-
-			/* Overflow must be added to zone counters */
-			z = n + os;
-			n = -os;
-		}
-	} while (this_cpu_cmpxchg(*p, o, n) != o);
-
-	if (z)
-		zone_page_state_add(z, zone, item);
-}
-
-void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
-	long delta)
-{
-	mod_zone_state(zone, item, delta, 0);
-}
-EXPORT_SYMBOL(mod_zone_page_state);
-
-void inc_zone_page_state(struct page *page, enum zone_stat_item item)
-{
-	mod_zone_state(page_zone(page), item, 1, 1);
-}
-EXPORT_SYMBOL(inc_zone_page_state);
-
-void dec_zone_page_state(struct page *page, enum zone_stat_item item)
-{
-	mod_zone_state(page_zone(page), item, -1, -1);
-}
-EXPORT_SYMBOL(dec_zone_page_state);
-
-static inline void mod_node_state(struct pglist_data *pgdat,
-	enum node_stat_item item, int delta, int overstep_mode)
-{
-	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
-	s8 __percpu *p = pcp->vm_node_stat_diff + item;
-	long o, n, t, z;
-
-	if (vmstat_item_in_bytes(item)) {
-		/*
-		 * Only cgroups use subpage accounting right now; at
-		 * the global level, these items still change in
-		 * multiples of whole pages. Store them as pages
-		 * internally to keep the per-cpu counters compact.
-		 */
-		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
-		delta >>= PAGE_SHIFT;
-	}
-
-	do {
-		z = 0;  /* overflow to node counters */
-
-		/*
-		 * The fetching of the stat_threshold is racy. We may apply
-		 * a counter threshold to the wrong the cpu if we get
-		 * rescheduled while executing here. However, the next
-		 * counter update will apply the threshold again and
-		 * therefore bring the counter under the threshold again.
-		 *
-		 * Most of the time the thresholds are the same anyways
-		 * for all cpus in a node.
-		 */
-		t = this_cpu_read(pcp->stat_threshold);
-
-		o = this_cpu_read(*p);
-		n = delta + o;
-
-		if (abs(n) > t) {
-			int os = overstep_mode * (t >> 1) ;
-
-			/* Overflow must be added to node counters */
-			z = n + os;
-			n = -os;
-		}
-	} while (this_cpu_cmpxchg(*p, o, n) != o);
-
-	if (z)
-		node_page_state_add(z, pgdat, item);
-}
-
-void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
-	long delta)
-{
-	mod_node_state(pgdat, item, delta, 0);
-}
-EXPORT_SYMBOL(mod_node_page_state);
-
-void inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
-{
-	mod_node_state(pgdat, item, 1, 1);
-}
-
-void inc_node_page_state(struct page *page, enum node_stat_item item)
-{
-	mod_node_state(page_pgdat(page), item, 1, 1);
-}
-EXPORT_SYMBOL(inc_node_page_state);
-
-void dec_node_page_state(struct page *page, enum node_stat_item item)
-{
-	mod_node_state(page_pgdat(page), item, -1, -1);
-}
-EXPORT_SYMBOL(dec_node_page_state);
-#else
 /*
  * Use interrupt disable to serialize counter updates
  */
Index: linux-vmstat-remote/mm/page_alloc.c
===================================================================
--- linux-vmstat-remote.orig/mm/page_alloc.c
+++ linux-vmstat-remote/mm/page_alloc.c
@@ -8628,9 +8628,6 @@ static int page_alloc_cpu_dead(unsigned
 	/*
 	 * Zero the differential counters of the dead processor
 	 * so that the vm statistics are consistent.
-	 *
-	 * This is only okay since the processor is dead and cannot
-	 * race with what we are doing.
 	 */
 	cpu_vm_stats_fold(cpu);
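As a worked example of the overstep arithmetic above (fold_once() below is
a hypothetical helper written only for this illustration, not a function
from the patch): with a threshold t = 32, an increment that pushes a
per-cpu diff to 33 folds 49 into the global counter and leaves -16 behind,
so roughly half a threshold of further increments is absorbed before the
next fold.

#include <stdio.h>

/* Hypothetical helper: apply the overstep rule a single time. */
static void fold_once(long n, long t, int overstep_mode)
{
	long os = overstep_mode * (t >> 1);
	long z = n + os;	/* folded into the global counter */
	long left = -os;	/* left behind in the per-cpu diff */

	printf("mode=%2d n=%4ld: fold %4ld, diff becomes %4ld\n",
	       overstep_mode, n, z, left);
}

int main(void)
{
	long t = 32;		/* stat_threshold */

	fold_once(33, t, 0);	/* mod_* path:  fold 33,  diff -> 0   */
	fold_once(33, t, 1);	/* inc_* path:  fold 49,  diff -> -16 */
	fold_once(-33, t, -1);	/* dec_* path:  fold -49, diff -> 16  */
	return 0;
}

Biasing the leftover diff against the direction of the update (modes 1 and
-1) means a run of pure increments or decrements triggers a fold only about
every one and a half thresholds instead of every threshold, at the cost of
the global counter leading the true value by at most half a threshold.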