From patchwork Thu Apr 10 15:23:27 2025
X-Patchwork-Submitter: Frederic Weisbecker <frederic@kernel.org>
X-Patchwork-Id: 14046699
From: Frederic Weisbecker <frederic@kernel.org>
To: LKML
Cc: Frederic Weisbecker,
	Andrew Morton,
	Ingo Molnar,
	Marcelo Tosatti,
	Michal Hocko,
	Oleg Nesterov,
	Peter Zijlstra,
	Thomas Gleixner,
	Valentin Schneider,
	Vlastimil Babka,
	linux-mm@kvack.org
Subject: [PATCH 6/6] mm: Drain LRUs upon resume to userspace on nohz_full CPUs
Date: Thu, 10 Apr 2025 17:23:27 +0200
Message-ID: <20250410152327.24504-7-frederic@kernel.org>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250410152327.24504-1-frederic@kernel.org>
References: <20250410152327.24504-1-frederic@kernel.org>
MIME-Version: 1.0

LRU batching can be a source of disturbance for isolated workloads
running in userspace, because draining the batches requires a kernel
worker and that worker preempts the isolated task. The primary source
of such disruption is __lru_add_drain_all(), which can be triggered
from non-isolated CPUs.

Why would an isolated CPU have anything in its pcp cache in the first
place? Many syscalls allocate pages that may end up there. A typical
and unavoidable example is fork/exec, which leaves pages behind in the
cache, just waiting for somebody to drain them.

Address the problem by noting when a batch has been added to the cache
and scheduling a drain upon return to userspace, so that the work is
done while the syscall is still executing and there are no surprises
while the task runs in userspace, where it doesn't want to be
preempted.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/pagevec.h  | 18 ++----------------
 include/linux/swap.h     |  1 +
 kernel/sched/isolation.c |  3 +++
 mm/swap.c                | 30 +++++++++++++++++++++++++++++-
 4 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 5d3a0cccc6bf..7e647b8df4c7 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -61,22 +61,8 @@ static inline unsigned int folio_batch_space(struct folio_batch *fbatch)
 	return PAGEVEC_SIZE - fbatch->nr;
 }
 
-/**
- * folio_batch_add() - Add a folio to a batch.
- * @fbatch: The folio batch.
- * @folio: The folio to add.
- *
- * The folio is added to the end of the batch.
- * The batch must have previously been initialised using folio_batch_init().
- *
- * Return: The number of slots still available.
- */
-static inline unsigned folio_batch_add(struct folio_batch *fbatch,
-		struct folio *folio)
-{
-	fbatch->folios[fbatch->nr++] = folio;
-	return folio_batch_space(fbatch);
-}
+unsigned int folio_batch_add(struct folio_batch *fbatch,
+			     struct folio *folio);
 
 /**
  * folio_batch_next - Return the next folio to process.
diff --git a/include/linux/swap.h b/include/linux/swap.h
index db46b25a65ae..8244475c2efe 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -401,6 +401,7 @@ extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
+extern void lru_add_and_bh_lrus_drain(void);
 void folio_deactivate(struct folio *folio);
 void folio_mark_lazyfree(struct folio *folio);
 extern void swap_setup(void);
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index e246287de9fa..553889f4e9be 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -8,6 +8,8 @@
  *
  */
 
+#include <linux/swap.h>
+
 enum hk_flags {
 	HK_FLAG_DOMAIN		= BIT(HK_TYPE_DOMAIN),
 	HK_FLAG_MANAGED_IRQ	= BIT(HK_TYPE_MANAGED_IRQ),
@@ -253,6 +255,7 @@ __setup("isolcpus=", housekeeping_isolcpus_setup);
 #if defined(CONFIG_NO_HZ_FULL)
 static void isolated_task_work(struct callback_head *head)
 {
+	lru_add_and_bh_lrus_drain();
 }
 
 int __isolated_task_work_queue(void)
diff --git a/mm/swap.c b/mm/swap.c
index 77b2d5997873..99a1b7b81e86 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include <linux/sched/isolation.h>
 
 #include "internal.h"
 
@@ -155,6 +156,29 @@ static void lru_add(struct lruvec *lruvec, struct folio *folio)
 	trace_mm_lru_insertion(folio);
 }
 
+/**
+ * folio_batch_add() - Add a folio to a batch.
+ * @fbatch: The folio batch.
+ * @folio: The folio to add.
+ *
+ * The folio is added to the end of the batch.
+ * The batch must have previously been initialised using folio_batch_init().
+ *
+ * Return: The number of slots still available.
+ */
+unsigned int folio_batch_add(struct folio_batch *fbatch,
+			     struct folio *folio)
+{
+	unsigned int ret;
+
+	fbatch->folios[fbatch->nr++] = folio;
+	ret = folio_batch_space(fbatch);
+	isolated_task_work_queue();
+
+	return ret;
+}
+EXPORT_SYMBOL(folio_batch_add);
+
 static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 {
 	int i;
@@ -738,7 +762,7 @@ void lru_add_drain(void)
  * the same cpu. It shouldn't be a problem in !SMP case since
  * the core is only one and the locks will disable preemption.
  */
-static void lru_add_and_bh_lrus_drain(void)
+void lru_add_and_bh_lrus_drain(void)
 {
 	local_lock(&cpu_fbatches.lock);
 	lru_add_drain_cpu(smp_processor_id());
@@ -864,6 +888,10 @@ static inline void __lru_add_drain_all(bool force_all_cpus)
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
+		/* Isolated CPUs handle their cache upon return to userspace */
+		if (!housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
+			continue;
+
 		if (cpu_needs_drain(cpu)) {
			INIT_WORK(work, lru_add_drain_per_cpu);
			queue_work_on(cpu, mm_percpu_wq, work);
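
For readers following the series, below is a condensed userspace model of
the control flow this patch sets up. It is only an illustrative sketch,
not kernel code: the names batch_add(), drain() and resume_to_userspace()
are stand-ins for folio_batch_add(), lru_add_and_bh_lrus_drain() and the
isolated_task_work() callback, and the data structures are simplified.

/*
 * Illustrative userspace model only -- not part of the patch.
 */
#include <stdbool.h>
#include <stdio.h>

#define BATCH_SIZE 15

struct batch {
	int nr;
	int items[BATCH_SIZE];
};

static struct batch cpu_batch;		/* stands in for cpu_fbatches */
static bool drain_queued;		/* stands in for the queued task work */

/* Like folio_batch_add(): stash the item, then note that a drain is due. */
static unsigned int batch_add(struct batch *b, int item)
{
	b->items[b->nr++] = item;
	drain_queued = true;		/* isolated_task_work_queue() in the patch */
	return BATCH_SIZE - b->nr;
}

/* Like lru_add_and_bh_lrus_drain(): flush whatever this CPU accumulated. */
static void drain(struct batch *b)
{
	printf("draining %d batched items\n", b->nr);
	b->nr = 0;
}

/* Like isolated_task_work(): runs just before the task reenters userspace. */
static void resume_to_userspace(void)
{
	if (drain_queued) {
		drain(&cpu_batch);
		drain_queued = false;
	}
}

int main(void)
{
	batch_add(&cpu_batch, 1);	/* e.g. pages left behind by fork/exec */
	batch_add(&cpu_batch, 2);
	resume_to_userspace();		/* drain happens in syscall context */
	return 0;
}

The point of the scheme shows up in main(): the drain is charged to the
task's own syscall exit instead of a workqueue item queued on the
isolated CPU, which is why __lru_add_drain_all() can skip non-housekeeping
CPUs in the hunk above.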