From patchwork Wed Feb 19 11:31:07 2025
X-Patchwork-Submitter: Gabriele Monaco
X-Patchwork-Id: 13981963
From: Gabriele Monaco
To: linux-kernel@vger.kernel.org, Andrew Morton, Ingo Molnar,
 Peter Zijlstra, Mathieu Desnoyers, "Paul E. McKenney", linux-mm@kvack.org
Cc: Gabriele Monaco, Ingo Molnar, Shuah Khan
Subject: [PATCH v7 1/2] sched: Move task_mm_cid_work to mm work_struct
Date: Wed, 19 Feb 2025 12:31:07 +0100
Message-ID: <20250219113108.325545-2-gmonaco@redhat.com>
In-Reply-To: <20250219113108.325545-1-gmonaco@redhat.com>
References: <20250219113108.325545-1-gmonaco@redhat.com>
MIME-Version: 1.0
Currently, the task_mm_cid_work function is called in a task work triggered by
a scheduler tick to frequently compact the mm_cids of each process. This can
delay the execution of the corresponding thread for the entire duration of the
function, negatively affecting response times for real-time tasks.
In practice, we observed task_mm_cid_work increasing latency by 30-35us on a
128-core system; this order of magnitude is meaningful under PREEMPT_RT.

Run task_mm_cid_work in a new work_struct connected to the mm_struct rather
than in the task context before returning to userspace. This work_struct is
initialised with the mm and disabled before freeing it. Its execution is no
longer triggered by scheduler ticks: the work is queued while returning to
userspace in __rseq_handle_notify_resume, keeping the checks that avoid
running more frequently than MM_CID_SCAN_DELAY.

The main advantage of this change is that the function can be offloaded to a
different CPU and even preempted by RT tasks. Moreover, the new behaviour is
more predictable for periodic tasks with short runtimes, which may rarely be
running during a scheduler tick; now the work is always scheduled when the
task returns to userspace.

The work is disabled during mmdrop: since the function cannot sleep in all
kernel configurations, we cannot wait for possibly running work items to
terminate. We make sure the mm is valid in case the task is terminating by
reserving it with mmgrab/mmdrop, returning prematurely if we are really the
last user before mmgrab. This situation is unlikely since we don't schedule
the work for exiting tasks, but we cannot rule it out.

Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
Signed-off-by: Gabriele Monaco
---
 include/linux/mm_types.h |  8 ++++++++
 include/linux/sched.h    |  7 ++++++-
 kernel/rseq.c            |  1 +
 kernel/sched/core.c      | 33 ++++++++++++---------------------
 kernel/sched/sched.h     |  2 --
 5 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0234f14f2aa6b..e748cf51e0c32 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -889,6 +889,10 @@ struct mm_struct {
 		 * mm nr_cpus_allowed updates.
 		 */
 		raw_spinlock_t cpus_allowed_lock;
+		/*
+		 * @cid_work: Work item to run the mm_cid scan.
+		 */
+		struct work_struct cid_work;
 #endif
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* size of all page tables */
@@ -1185,6 +1189,8 @@ enum mm_cid_state {
 	MM_CID_LAZY_PUT = (1U << 31),
 };
 
+extern void task_mm_cid_work(struct work_struct *work);
+
 static inline bool mm_cid_is_unset(int cid)
 {
 	return cid == MM_CID_UNSET;
@@ -1257,12 +1263,14 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
 	if (!mm->pcpu_cid)
 		return -ENOMEM;
 	mm_init_cid(mm, p);
+	INIT_WORK(&mm->cid_work, task_mm_cid_work);
 	return 0;
 }
 #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 
 static inline void mm_destroy_cid(struct mm_struct *mm)
 {
+	disable_work(&mm->cid_work);
 	free_percpu(mm->pcpu_cid);
 	mm->pcpu_cid = NULL;
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9632e3318e0d6..2fd65f125153d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1397,7 +1397,6 @@ struct task_struct {
 	int			last_mm_cid;	/* Most recent cid in mm */
 	int			migrate_from_cpu;
 	int			mm_cid_active;	/* Whether cid bitmap is active */
-	struct callback_head	cid_work;
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
@@ -2254,4 +2253,10 @@ static __always_inline void alloc_tag_restore(struct alloc_tag *tag, struct allo
 #define alloc_tag_restore(_tag, _old)	do {} while (0)
 #endif
 
+#ifdef CONFIG_SCHED_MM_CID
+extern void task_queue_mm_cid(struct task_struct *curr);
+#else
+static inline void task_queue_mm_cid(struct task_struct *curr) { }
+#endif
+
 #endif
diff --git a/kernel/rseq.c b/kernel/rseq.c
index 442aba29bc4cf..f8394ebbb6f4d 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -419,6 +419,7 @@ void __rseq_handle_notify_resume(struct ksignal *ksig, struct pt_regs *regs)
 	}
 	if (unlikely(rseq_update_cpu_node_id(t)))
 		goto error;
+	task_queue_mm_cid(t);
 	return;
 
 error:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9aecd914ac691..ee35f9962444b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5663,7 +5663,6 @@ void sched_tick(void)
 	resched_latency = cpu_resched_latency(rq);
 	calc_global_load_tick(rq);
 	sched_core_tick(rq);
-	task_tick_mm_cid(rq, donor);
 	scx_tick(rq);
 
 	rq_unlock(rq, &rf);
@@ -10530,22 +10529,16 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
 		sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
 }
 
-static void task_mm_cid_work(struct callback_head *work)
+void task_mm_cid_work(struct work_struct *work)
 {
 	unsigned long now = jiffies, old_scan, next_scan;
-	struct task_struct *t = current;
 	struct cpumask *cidmask;
-	struct mm_struct *mm;
+	struct mm_struct *mm = container_of(work, struct mm_struct, cid_work);
 	int weight, cpu;
 
-	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-
-	work->next = work;	/* Prevent double-add */
-	if (t->flags & PF_EXITING)
-		return;
-	mm = t->mm;
-	if (!mm)
+	if (!atomic_read(&mm->mm_count))
 		return;
+	mmgrab(mm);
 	old_scan = READ_ONCE(mm->mm_cid_next_scan);
 	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
 	if (!old_scan) {
@@ -10558,9 +10551,9 @@ static void task_mm_cid_work(struct callback_head *work)
 		old_scan = next_scan;
 	}
 	if (time_before(now, old_scan))
-		return;
+		goto out_drop;
 	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		return;
+		goto out_drop;
 	cidmask = mm_cidmask(mm);
 	/* Clear cids that were not recently used. */
 	for_each_possible_cpu(cpu)
@@ -10572,6 +10565,8 @@ static void task_mm_cid_work(struct callback_head *work)
 	 */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
+out_drop:
+	mmdrop(mm);
 }
 
 void init_sched_mm_cid(struct task_struct *t)
@@ -10584,23 +10579,19 @@ void init_sched_mm_cid(struct task_struct *t)
 		if (mm_users == 1)
 			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
 	}
-	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-	init_task_work(&t->cid_work, task_mm_cid_work);
 }
 
-void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
+void task_queue_mm_cid(struct task_struct *curr)
 {
-	struct callback_head *work = &curr->cid_work;
+	struct work_struct *work = &curr->mm->cid_work;
 	unsigned long now = jiffies;
 
-	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-	    work->next != work)
+	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)))
 		return;
 	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
 		return;
-	/* No page allocation under rq lock */
-	task_work_add(curr, work, TWA_RESUME);
+	schedule_work(work);
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c8512a9fb0229..37a2e2328283e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3630,7 +3630,6 @@ extern int use_cid_lock;
 
 extern void sched_mm_cid_migrate_from(struct task_struct *t);
 extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
 extern void init_sched_mm_cid(struct task_struct *t);
 
 static inline void __mm_cid_put(struct mm_struct *mm, int cid)
@@ -3899,7 +3898,6 @@ static inline void switch_mm_cid(struct rq *rq,
 static inline void switch_mm_cid(struct rq *rq,
 				 struct task_struct *prev, struct task_struct *next) { }
 static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
 static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
 static inline void init_sched_mm_cid(struct task_struct *t) { }
 #endif /* !CONFIG_SCHED_MM_CID */