From patchwork Thu Mar 30 19:17:58 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13194755
Date: Thu, 30 Mar 2023 19:17:58 +0000
In-Reply-To: <20230330191801.1967435-1-yosryahmed@google.com>
References: <20230330191801.1967435-1-yosryahmed@google.com>
Message-ID: <20230330191801.1967435-6-yosryahmed@google.com>
Subject: [PATCH v3 5/8] memcg: sleep during flushing stats in safe contexts
From: Yosry Ahmed
To: Tejun Heo, Josef Bacik, Jens Axboe, Zefan Li, Johannes Weiner,
 Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton,
 Michal Koutný
Cc: Vasily Averin, cgroups@vger.kernel.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org,
 Yosry Ahmed, Michal Hocko
Currently, all contexts that flush memcg stats do so with sleeping not allowed.
Some of these contexts are perfectly safe to sleep in, such as reading cgroup
files from userspace or the background periodic flusher. Flushing is an
expensive operation that scales with the number of cpus and the number of
cgroups in the system, so avoid doing it atomically where possible.

Refactor the code to make mem_cgroup_flush_stats() non-atomic (aka sleepable),
and provide a separate atomic version. The atomic version is used in reclaim,
refault, writeback, and in mem_cgroup_usage(). All other code paths are left
to use the non-atomic version. This includes callbacks for userspace reads and
the periodic flusher.

Since refault is the only caller of mem_cgroup_flush_stats_ratelimited(),
change it to mem_cgroup_flush_stats_atomic_ratelimited(). Reclaim and refault
code paths are modified to do non-atomic flushing in separate later patches --
so it will eventually be changed back to
mem_cgroup_flush_stats_ratelimited().

Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Michal Hocko
Acked-by: Johannes Weiner
---
 include/linux/memcontrol.h |  9 ++++++--
 mm/memcontrol.c            | 45 ++++++++++++++++++++++++++++++--------
 mm/vmscan.c                |  2 +-
 mm/workingset.c            |  2 +-
 4 files changed, 45 insertions(+), 13 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index ac3f3b3a45e2..b424ba3ebd09 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1037,7 +1037,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_ratelimited(void);
+void mem_cgroup_flush_stats_atomic(void);
+void mem_cgroup_flush_stats_atomic_ratelimited(void);
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			      int val);
@@ -1535,7 +1536,11 @@ static inline void mem_cgroup_flush_stats(void)
 {
 }
 
-static inline void mem_cgroup_flush_stats_ratelimited(void)
+static inline void mem_cgroup_flush_stats_atomic(void)
+{
+}
+
+static inline void mem_cgroup_flush_stats_atomic_ratelimited(void)
 {
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 65750f8b8259..a2ce3aa10d94 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -634,7 +634,7 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	}
 }
 
-static void __mem_cgroup_flush_stats(void)
+static void do_flush_stats(bool atomic)
 {
 	/*
 	 * We always flush the entire tree, so concurrent flushers can just
@@ -646,26 +646,46 @@ static void __mem_cgroup_flush_stats(void)
 		return;
 
 	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
-	cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
+
+	if (atomic)
+		cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
+	else
+		cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+
 	atomic_set(&stats_flush_threshold, 0);
 	atomic_set(&stats_flush_ongoing, 0);
 }
 
+static bool should_flush_stats(void)
+{
+	return atomic_read(&stats_flush_threshold) > num_online_cpus();
+}
+
 void mem_cgroup_flush_stats(void)
 {
-	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
-		__mem_cgroup_flush_stats();
+	if (should_flush_stats())
+		do_flush_stats(false);
 }
 
-void mem_cgroup_flush_stats_ratelimited(void)
+void mem_cgroup_flush_stats_atomic(void)
+{
+	if (should_flush_stats())
+		do_flush_stats(true);
+}
+
+void mem_cgroup_flush_stats_atomic_ratelimited(void)
 {
 	if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
-		mem_cgroup_flush_stats();
+		mem_cgroup_flush_stats_atomic();
 }
 
 static void flush_memcg_stats_dwork(struct work_struct *w)
 {
-	__mem_cgroup_flush_stats();
+	/*
+	 * Always flush here so that flushing in latency-sensitive paths is
+	 * as cheap as possible.
+	 */
+	do_flush_stats(false);
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
@@ -3685,9 +3705,12 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
 	 * done from irq context; use stale stats in this case.
 	 * Arguably, usage threshold events are not reliable on the root
 	 * memcg anyway since its usage is ill-defined.
+	 *
+	 * Additionally, other call paths through memcg_check_events()
+	 * disable irqs, so make sure we are flushing stats atomically.
 	 */
 	if (in_task())
-		mem_cgroup_flush_stats();
+		mem_cgroup_flush_stats_atomic();
 	val = memcg_page_state(memcg, NR_FILE_PAGES) +
 		memcg_page_state(memcg, NR_ANON_MAPPED);
 	if (swap)
@@ -4610,7 +4633,11 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
 	struct mem_cgroup *parent;
 
-	mem_cgroup_flush_stats();
+	/*
+	 * wb_writeback() takes a spinlock and calls
+	 * wb_over_bg_thresh()->mem_cgroup_wb_stats(). Do not sleep.
+	 */
+	mem_cgroup_flush_stats_atomic();
 
 	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
 	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c1c5e8b24b8..a9511ccb936f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2845,7 +2845,7 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
 	 * Flush the memory cgroup stats, so that we read accurate per-memcg
 	 * lruvec stats for heuristics.
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats_atomic();
 
 	/*
 	 * Determine the scan balance between anon and file LRUs.
diff --git a/mm/workingset.c b/mm/workingset.c
index af862c6738c3..dab0c362b9e3 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -462,7 +462,7 @@ void workingset_refault(struct folio *folio, void *shadow)
 
 	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
 
-	mem_cgroup_flush_stats_ratelimited();
+	mem_cgroup_flush_stats_atomic_ratelimited();
 	/*
 	 * Compare the distance to the existing workingset size. We
 	 * don't activate pages that couldn't stay resident even if