From patchwork Tue Mar 11 06:28:45 2025
X-Patchwork-Submitter: Gabriele Monaco
X-Patchwork-Id: 14011194
From: Gabriele Monaco <gmonaco@redhat.com>
To: linux-kernel@vger.kernel.org, Andrew Morton, Ingo Molnar,
    Peter Zijlstra, Mathieu Desnoyers, "Paul E. McKenney",
    linux-mm@kvack.org
Cc: Gabriele Monaco, Ingo Molnar, Shuah Khan
Subject: [PATCH v12 2/3] sched: Move task_mm_cid_work to mm work_struct
Date: Tue, 11 Mar 2025 07:28:45 +0100
Message-ID: <20250311062849.72083-3-gmonaco@redhat.com>
In-Reply-To: <20250311062849.72083-1-gmonaco@redhat.com>
References: <20250311062849.72083-1-gmonaco@redhat.com>

Currently, the task_mm_cid_work function is called in a task work
triggered by a scheduler tick to frequently compact the mm_cids of
each process.
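As a rough user-space sketch of what that means for the task (hypothetical
names, not kernel code): the deferred callback runs in the task's own
context at a safe point, so the task itself absorbs the whole duration of
the scan as added latency.

/*
 * User-space analog of the pre-patch behaviour (hypothetical names,
 * not kernel code): the callback runs on the same thread, so this
 * thread pays for the whole scan.
 */
#include <stdio.h>
#include <time.h>

static void compact_cids(void)			/* stand-in for task_mm_cid_work() */
{
	struct timespec d = { .tv_sec = 0, .tv_nsec = 30 * 1000 };

	nanosleep(&d, NULL);			/* pretend the scan costs ~30us */
}

static long timed_iteration(int scan_pending)
{
	struct timespec a, b;

	clock_gettime(CLOCK_MONOTONIC, &a);
	if (scan_pending)
		compact_cids();			/* task-work analog: same thread pays */
	clock_gettime(CLOCK_MONOTONIC, &b);
	return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
	printf("iteration without scan: %ld ns\n", timed_iteration(0));
	printf("iteration with scan:    %ld ns\n", timed_iteration(1));
	return 0;
}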
Running the scan in the task's own context can delay the execution of
the corresponding thread for the entire duration of the function,
which negatively affects responsiveness for real-time tasks. In
practice, we observe task_mm_cid_work increasing the latency by
30-35us on a 128-core system; this order of magnitude is significant
under PREEMPT_RT.

Run task_mm_cid_work in a new work_struct connected to the mm_struct
rather than in the task context before returning to userspace. This
work_struct is initialised with the mm and disabled before freeing it.
The work is queued while returning to userspace in
__rseq_handle_notify_resume, keeping the checks that prevent it from
running more frequently than MM_CID_SCAN_DELAY. To make sure this also
happens predictably for long-running tasks, we trigger a call to
__rseq_handle_notify_resume from the scheduler tick if the runtime
exceeds a 100ms threshold.

The main advantage of this change is that the function can be
offloaded to a different CPU and even preempted by RT tasks. Moreover,
the new behaviour is more predictable for periodic tasks with short
runtimes, which may rarely be running when the scheduler tick fires;
now the work is always scheduled when the task returns to userspace.

The work is disabled during mmdrop: since the function cannot sleep in
all kernel configurations, we cannot wait for possibly running work
items to terminate. We make sure the mm stays valid while the task is
terminating by reserving it with mmgrab/mmdrop, and return early if we
turn out to be the last user by the time the work runs. This situation
is unlikely, since we don't schedule the work for exiting tasks, but
we cannot rule it out.

Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
Reviewed-by: Mathieu Desnoyers
Signed-off-by: Gabriele Monaco
---
 include/linux/mm_types.h | 17 ++++++++++++++++
 include/linux/rseq.h     | 13 ++++++++++++
 include/linux/sched.h    |  7 ++++++-
 kernel/rseq.c            |  2 ++
 kernel/sched/core.c      | 43 ++++++++++++++--------------------------
 kernel/sched/sched.h     |  2 --
 6 files changed, 53 insertions(+), 31 deletions(-)
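The scheme is easier to review against a stripped-down user-space analog
(all names below are made up for illustration; this is not the kernel
implementation): the work item is embedded in the owning object, the
handler recovers its owner with container_of(), and a reference count
pinned at queue time stands in for mmgrab()/mmdrop(), including the
last-user check before the scan.

/*
 * User-space analog of the new scheme (hypothetical names, not the
 * kernel implementation).
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct work_item {
	void (*func)(struct work_item *w);
};

struct fake_mm {
	atomic_int refcount;			/* mm_count analog */
	int cid_state;				/* stand-in for the per-cpu cid data */
	struct work_item cid_work;		/* embedded, like mm->cid_work */
};

static void fake_mm_drop(struct fake_mm *mm)	/* mmdrop() analog */
{
	if (atomic_fetch_sub(&mm->refcount, 1) == 1)
		free(mm);
}

static void cid_scan_work(struct work_item *w)
{
	struct fake_mm *mm = container_of(w, struct fake_mm, cid_work);

	/* Last user: the process is gone, nothing left to compact. */
	if (atomic_load(&mm->refcount) == 1)
		goto out_drop;
	mm->cid_state = 0;			/* "compact" the cids */
out_drop:
	fake_mm_drop(mm);			/* drop the reference taken at queue time */
}

static void *worker(void *arg)			/* unbound workqueue analog */
{
	struct work_item *w = arg;

	w->func(w);
	return NULL;
}

int main(void)
{
	struct fake_mm *mm = calloc(1, sizeof(*mm));
	pthread_t thr;

	if (!mm)
		return 1;
	atomic_store(&mm->refcount, 1);		/* the "task" owns one reference */
	mm->cid_work.func = cid_scan_work;

	atomic_fetch_add(&mm->refcount, 1);	/* mmgrab() analog: pin across the work */
	pthread_create(&thr, NULL, worker, &mm->cid_work);
	pthread_join(thr, NULL);

	fake_mm_drop(mm);			/* the task's own reference */
	puts("done");
	return 0;
}

Build with gcc -pthread; whichever side drops the final reference frees
the owner, mirroring the mmdrop in the work handler.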
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0234f14f2aa6b..c79f468337fc0 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -889,6 +889,10 @@ struct mm_struct {
 		 * mm nr_cpus_allowed updates.
 		 */
 		raw_spinlock_t cpus_allowed_lock;
+		/*
+		 * @cid_work: Work item to run the mm_cid scan.
+		 */
+		struct work_struct cid_work;
 #endif
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* size of all page tables */
@@ -1185,6 +1189,8 @@ enum mm_cid_state {
 	MM_CID_LAZY_PUT = (1U << 31),
 };
 
+extern void task_mm_cid_work(struct work_struct *work);
+
 static inline bool mm_cid_is_unset(int cid)
 {
 	return cid == MM_CID_UNSET;
@@ -1257,12 +1263,14 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
 	if (!mm->pcpu_cid)
 		return -ENOMEM;
 	mm_init_cid(mm, p);
+	INIT_WORK(&mm->cid_work, task_mm_cid_work);
 	return 0;
 }
 #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 
 static inline void mm_destroy_cid(struct mm_struct *mm)
 {
+	disable_work(&mm->cid_work);
 	free_percpu(mm->pcpu_cid);
 	mm->pcpu_cid = NULL;
 }
@@ -1284,6 +1292,11 @@ static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumas
 	WRITE_ONCE(mm->nr_cpus_allowed, cpumask_weight(mm_allowed));
 	raw_spin_unlock(&mm->cpus_allowed_lock);
 }
+
+static inline bool mm_cid_needs_scan(struct mm_struct *mm)
+{
+	return mm && !time_before(jiffies, READ_ONCE(mm->mm_cid_next_scan));
+}
 #else /* CONFIG_SCHED_MM_CID */
 static inline void mm_init_cid(struct mm_struct *mm, struct task_struct *p) { }
 static inline int mm_alloc_cid(struct mm_struct *mm, struct task_struct *p) { return 0; }
@@ -1294,6 +1307,10 @@ static inline unsigned int mm_cid_size(void)
 	return 0;
 }
 static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumask *cpumask) { }
+static inline bool mm_cid_needs_scan(struct mm_struct *mm)
+{
+	return false;
+}
 #endif /* CONFIG_SCHED_MM_CID */
 
 struct mmu_gather;
diff --git a/include/linux/rseq.h b/include/linux/rseq.h
index bc8af3eb55987..d20fd72f4c80d 100644
--- a/include/linux/rseq.h
+++ b/include/linux/rseq.h
@@ -7,6 +7,8 @@
 #include <linux/preempt.h>
 #include <linux/sched.h>
 
+#define RSEQ_UNPREEMPTED_THRESHOLD	(100ULL * 1000000)	/* 100ms */
+
 /*
  * Map the event mask on the user-space ABI enum rseq_cs_flags
  * for direct mask checks.
@@ -54,6 +56,14 @@ static inline void rseq_preempt(struct task_struct *t)
 	rseq_set_notify_resume(t);
 }
 
+static inline void rseq_preempt_from_tick(struct task_struct *t)
+{
+	u64 rtime = t->se.sum_exec_runtime - t->se.prev_sum_exec_runtime;
+
+	if (rtime > RSEQ_UNPREEMPTED_THRESHOLD)
+		rseq_preempt(t);
+}
+
 /* rseq_migrate() requires preemption to be disabled. */
 static inline void rseq_migrate(struct task_struct *t)
 {
@@ -104,6 +114,9 @@ static inline void rseq_signal_deliver(struct ksignal *ksig,
 static inline void rseq_preempt(struct task_struct *t)
 {
 }
+static inline void rseq_preempt_from_tick(struct task_struct *t)
+{
+}
 static inline void rseq_migrate(struct task_struct *t)
 {
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9c15365a30c08..a40bb0b38d2e7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1397,7 +1397,6 @@ struct task_struct {
 	int			last_mm_cid;	/* Most recent cid in mm */
 	int			migrate_from_cpu;
 	int			mm_cid_active;	/* Whether cid bitmap is active */
-	struct callback_head	cid_work;
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
@@ -2254,4 +2253,10 @@ static __always_inline void alloc_tag_restore(struct alloc_tag *tag, struct allo
 #define alloc_tag_restore(_tag, _old)	do {} while (0)
 #endif
 
+#ifdef CONFIG_SCHED_MM_CID
+extern void task_queue_mm_cid(struct task_struct *curr);
+#else
+static inline void task_queue_mm_cid(struct task_struct *curr) { }
+#endif
+
 #endif
diff --git a/kernel/rseq.c b/kernel/rseq.c
index 2cb16091ec0ae..909547ec52fd6 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -419,6 +419,8 @@ void __rseq_handle_notify_resume(struct ksignal *ksig, struct pt_regs *regs)
 	}
 	if (unlikely(rseq_update_cpu_node_id(t)))
 		goto error;
+	if (mm_cid_needs_scan(t->mm))
+		task_queue_mm_cid(t);
 	return;
 
 error:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 67189907214d3..f42b6f2d06b95 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5663,7 +5663,7 @@ void sched_tick(void)
 	resched_latency = cpu_resched_latency(rq);
 	calc_global_load_tick(rq);
 	sched_core_tick(rq);
-	task_tick_mm_cid(rq, donor);
+	rseq_preempt_from_tick(donor);
 	scx_tick(rq);
 
 	rq_unlock(rq, &rf);
@@ -10530,22 +10530,16 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
 	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
 }
 
-static void task_mm_cid_work(struct callback_head *work)
+void task_mm_cid_work(struct work_struct *work)
 {
 	unsigned long now = jiffies, old_scan, next_scan;
-	struct task_struct *t = current;
 	struct cpumask *cidmask;
-	struct mm_struct *mm;
+	struct mm_struct *mm = container_of(work, struct mm_struct, cid_work);
 	int weight, cpu;
 
-	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-
-	work->next = work;	/* Prevent double-add */
-	if (t->flags & PF_EXITING)
-		return;
-	mm = t->mm;
-	if (!mm)
-		return;
+	/* We are the last user, process already terminated. */
+	if (atomic_read(&mm->mm_count) == 1)
+		goto out_drop;
 	old_scan = READ_ONCE(mm->mm_cid_next_scan);
 	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
 	if (!old_scan) {
@@ -10558,9 +10552,9 @@ static void task_mm_cid_work(struct callback_head *work)
 		old_scan = next_scan;
 	}
 	if (time_before(now, old_scan))
-		return;
+		goto out_drop;
 	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		return;
+		goto out_drop;
 	cidmask = mm_cidmask(mm);
 	/* Clear cids that were not recently used. */
 	for_each_possible_cpu(cpu)
@@ -10572,6 +10566,8 @@ static void task_mm_cid_work(struct callback_head *work)
 	 */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
+out_drop:
+	mmdrop(mm);
 }
 
 void init_sched_mm_cid(struct task_struct *t)
@@ -10584,23 +10580,14 @@ void init_sched_mm_cid(struct task_struct *t)
 		if (mm_users == 1)
 			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
 	}
-	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-	init_task_work(&t->cid_work, task_mm_cid_work);
 }
 
-void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
+/* Call only when curr is a user thread. */
+void task_queue_mm_cid(struct task_struct *curr)
 {
-	struct callback_head *work = &curr->cid_work;
-	unsigned long now = jiffies;
-
-	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-	    work->next != work)
-		return;
-	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-		return;
-
-	/* No page allocation under rq lock */
-	task_work_add(curr, work, TWA_RESUME);
+	/* Ensure the mm exists when we run. */
+	mmgrab(curr->mm);
+	queue_work(system_unbound_wq, &curr->mm->cid_work);
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c8512a9fb0229..37a2e2328283e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3630,7 +3630,6 @@ extern int use_cid_lock;
 
 extern void sched_mm_cid_migrate_from(struct task_struct *t);
 extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
 extern void init_sched_mm_cid(struct task_struct *t);
 
 static inline void __mm_cid_put(struct mm_struct *mm, int cid)
@@ -3899,7 +3898,6 @@ static inline void switch_mm_cid(struct rq *rq,
 static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
 static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
 static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
 static inline void init_sched_mm_cid(struct task_struct *t) { }
 #endif /* !CONFIG_SCHED_MM_CID */
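For completeness, the MM_CID_SCAN_DELAY rate limiting that the patch keeps
in place can be sketched the same way in user space (made-up names, not the
kernel API): the next-scan deadline is advanced with a single
compare-and-swap, so concurrent callers cannot both win the same window.

/*
 * User-space analog of the scan rate limiting (hypothetical names,
 * not the kernel API).
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define SCAN_DELAY_MS 100ULL

static _Atomic unsigned long long next_scan_ms;	/* mm->mm_cid_next_scan analog */

static unsigned long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long long)ts.tv_sec * 1000ULL + ts.tv_nsec / 1000000ULL;
}

/* Returns true if this caller won the right to run the scan now. */
static bool should_scan(void)
{
	unsigned long long now = now_ms();
	unsigned long long old = atomic_load(&next_scan_ms);

	if (now < old)				/* time_before(now, old_scan) analog */
		return false;
	/* try_cmpxchg() analog: only one caller moves the deadline forward */
	return atomic_compare_exchange_strong(&next_scan_ms, &old, now + SCAN_DELAY_MS);
}

int main(void)
{
	printf("first check:  %s\n", should_scan() ? "scan" : "skip");
	printf("second check: %s\n", should_scan() ? "scan" : "skip");
	return 0;
}

Running it prints "scan" followed by "skip", because the first caller
pushed the deadline 100ms into the future.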