From patchwork Mon Aug 1 23:42:57 2022
X-Patchwork-Submitter: Aaron Tomlin
X-Patchwork-Id: 12934170
From: Aaron Tomlin <atomlin@redhat.com>
To: frederic@kernel.org, mtosatti@redhat.com
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
    pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
    atomlin@atomlin.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v5 1/2] mm/vmstat: Use per cpu variable to track a vmstat discrepancy
Date: Tue, 2 Aug 2022 00:42:57 +0100
Message-Id: <20220801234258.134609-2-atomlin@redhat.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220801234258.134609-1-atomlin@redhat.com>
References: <20220801234258.134609-1-atomlin@redhat.com>

This patch incorporates an idea from Marcelo's patch [1]: a CPU-specific
variable, vmstat_dirty, is used to indicate whether a vmstat imbalance is
present for a given CPU. Therefore, at the appropriate time, we can fold all
of the remaining differentials.

[1]: https://lore.kernel.org/lkml/20220204173554.763888172@fedora.localdomain/
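As a rough illustration of the idea, the sketch below marks a single dirty
flag on every counter update so that the fold step can test one flag instead
of scanning every differential. It is ordinary user-space C, with a plain
bool and arrays standing in for the per-CPU vmstat machinery; none of the
names below are the kernel API:

  #include <stdbool.h>
  #include <stdio.h>

  #define NR_COUNTERS 8

  static long diff[NR_COUNTERS];   /* stand-in for the per-CPU stat diffs */
  static long global[NR_COUNTERS]; /* stand-in for the global counters   */
  static bool dirty;               /* stand-in for per-CPU vmstat_dirty   */

  static void mod_counter(int item, long delta)
  {
          diff[item] += delta;
          dirty = true;            /* cheap mark on every update path */
  }

  static void fold_diffs(void)
  {
          int i;

          if (!dirty)              /* one flag test instead of a full scan */
                  return;

          for (i = 0; i < NR_COUNTERS; i++) {
                  global[i] += diff[i];
                  diff[i] = 0;
          }
          dirty = false;
  }

  int main(void)
  {
          mod_counter(3, 2);
          fold_diffs();
          printf("counter 3 is now %ld\n", global[3]);
          return 0;
  }

The benefit in the kernel is the same: quiet_vmstat() and vmstat_shepherd()
no longer need the memchr_inv() scan that need_update() performed.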
Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
---
 mm/vmstat.c | 46 +++++++++++++++-------------------------------
 1 file changed, 15 insertions(+), 31 deletions(-)

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 373d2730fcf2..51564b7c85fe 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -195,6 +195,12 @@ void fold_vm_numa_events(void)
 #endif
 
 #ifdef CONFIG_SMP
+static DEFINE_PER_CPU_ALIGNED(bool, vmstat_dirty);
+
+static inline void mark_vmstat_dirty(void)
+{
+	this_cpu_write(vmstat_dirty, true);
+}
 
 int calculate_pressure_threshold(struct zone *zone)
 {
@@ -367,6 +373,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	mark_vmstat_dirty();
 
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		preempt_enable();
@@ -405,6 +412,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	mark_vmstat_dirty();
 
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		preempt_enable();
@@ -603,6 +611,7 @@ static inline void mod_zone_state(struct zone *zone,
 
 	if (z)
 		zone_page_state_add(z, zone, item);
+	mark_vmstat_dirty();
 }
 
 void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
@@ -671,6 +680,7 @@ static inline void mod_node_state(struct pglist_data *pgdat,
 
 	if (z)
 		node_page_state_add(z, pgdat, item);
+	mark_vmstat_dirty();
 }
 
 void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
@@ -1873,6 +1883,7 @@ int sysctl_stat_interval __read_mostly = HZ;
 static void refresh_vm_stats(struct work_struct *work)
 {
 	refresh_cpu_vm_stats(true);
+	this_cpu_write(vmstat_dirty, false);
 }
 
 int vmstat_refresh(struct ctl_table *table, int write,
@@ -1937,6 +1948,7 @@ int vmstat_refresh(struct ctl_table *table, int write,
 static void vmstat_update(struct work_struct *w)
 {
 	if (refresh_cpu_vm_stats(true)) {
+		this_cpu_write(vmstat_dirty, false);
 		/*
 		 * Counters were updated so we expect more updates
 		 * to occur in the future. Keep on running the
@@ -1948,35 +1960,6 @@ static void vmstat_update(struct work_struct *w)
 	}
 }
 
-/*
- * Check if the diffs for a certain cpu indicate that
- * an update is needed.
- */
-static bool need_update(int cpu)
-{
-	pg_data_t *last_pgdat = NULL;
-	struct zone *zone;
-
-	for_each_populated_zone(zone) {
-		struct per_cpu_zonestat *pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
-		struct per_cpu_nodestat *n;
-
-		/*
-		 * The fast way of checking if there are any vmstat diffs.
-		 */
-		if (memchr_inv(pzstats->vm_stat_diff, 0, sizeof(pzstats->vm_stat_diff)))
-			return true;
-
-		if (last_pgdat == zone->zone_pgdat)
-			continue;
-		last_pgdat = zone->zone_pgdat;
-		n = per_cpu_ptr(zone->zone_pgdat->per_cpu_nodestats, cpu);
-		if (memchr_inv(n->vm_node_stat_diff, 0, sizeof(n->vm_node_stat_diff)))
-			return true;
-	}
-	return false;
-}
-
 /*
  * Switch off vmstat processing and then fold all the remaining differentials
  * until the diffs stay at zero. The function is used by NOHZ and can only be
@@ -1990,7 +1973,7 @@ void quiet_vmstat(void)
 	if (!delayed_work_pending(this_cpu_ptr(&vmstat_work)))
 		return;
 
-	if (!need_update(smp_processor_id()))
+	if (!__this_cpu_read(vmstat_dirty))
 		return;
 
 	/*
@@ -2000,6 +1983,7 @@ void quiet_vmstat(void)
 	 * vmstat_shepherd will take care about that for us.
 	 */
 	refresh_cpu_vm_stats(false);
+	__this_cpu_write(vmstat_dirty, false);
 }
 
 /*
@@ -2021,7 +2005,7 @@ static void vmstat_shepherd(struct work_struct *w)
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
-		if (!delayed_work_pending(dw) && need_update(cpu))
+		if (!delayed_work_pending(dw) && per_cpu(vmstat_dirty, cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 
 		cond_resched();