From patchwork Mon Feb 10 07:57:00 2025
X-Patchwork-Submitter: Gabriele Monaco
X-Patchwork-Id: 13967439
From: Gabriele Monaco <gmonaco@redhat.com>
To: Mathieu Desnoyers, Andrew Morton, Ingo Molnar, Peter Zijlstra,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Marco Elver, Ingo Molnar, Gabriele Monaco
Subject: [PATCH v5 1/3] sched: Compact RSEQ concurrency IDs with reduced
 threads and affinity
Date: Mon, 10 Feb 2025 08:57:00 +0100
Message-ID: <20250210075703.79125-2-gmonaco@redhat.com>
In-Reply-To: <20250210075703.79125-1-gmonaco@redhat.com>
References: <20250210075703.79125-1-gmonaco@redhat.com>

From: Mathieu Desnoyers

When a process reduces its number of threads or clears bits in its CPU
affinity mask, the mm_cid
allocation should eventually converge towards smaller values.

However, the change introduced by:

commit 7e019dcc470f ("sched: Improve cache locality of RSEQ
concurrency IDs for intermittent workloads")

adds a per-mm/CPU recent_cid which is never unset unless a thread
migrates.

This is a tradeoff between:

A) Preserving cache locality after a transition from many threads to
   few threads, or after reducing the hamming weight of the allowed
   CPU mask.

B) Making the mm_cid upper bounds wrt nr threads and allowed CPU mask
   easy to document and understand.

C) Allowing applications to eventually react to mm_cid compaction
   after reduction of the nr threads or allowed CPU mask, making the
   tracking of mm_cid compaction easier by shrinking it back towards 0
   or not.

D) Making sure applications that periodically reduce and then increase
   again the nr threads or allowed CPU mask still benefit from good
   cache locality with mm_cid.

Introduce the following changes:

* After shrinking the number of threads or reducing the number of
  allowed CPUs, reduce the value of max_nr_cid so expansion of CID
  allocation will preserve cache locality if the number of threads or
  allowed CPUs increase again.

* Only re-use a recent_cid if it is within the max_nr_cid upper bound,
  else find the first available CID.

Fixes: 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads")
Cc: Peter Zijlstra (Intel)
Cc: Marco Elver
Cc: Ingo Molnar
Tested-by: Gabriele Monaco
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Gabriele Monaco
---
 include/linux/mm_types.h |  7 ++++---
 kernel/sched/sched.h     | 25 ++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6b27db7f94963..0234f14f2aa6b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -875,10 +875,11 @@ struct mm_struct {
 		 */
 		unsigned int nr_cpus_allowed;
 		/**
-		 * @max_nr_cid: Maximum number of concurrency IDs allocated.
+		 * @max_nr_cid: Maximum number of allowed concurrency
+		 * IDs allocated.
 		 *
-		 * Track the highest number of concurrency IDs allocated for the
-		 * mm.
+		 * Track the highest number of allowed concurrency IDs
+		 * allocated for the mm.
 		 */
 		atomic_t max_nr_cid;
 		/**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 38e0e323dda26..606c96b74ebfa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3698,10 +3698,28 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 {
 	struct cpumask *cidmask = mm_cidmask(mm);
 	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-	int cid = __this_cpu_read(pcpu_cid->recent_cid);
+	int cid, max_nr_cid, allowed_max_nr_cid;
 
+	/*
+	 * After shrinking the number of threads or reducing the number
+	 * of allowed cpus, reduce the value of max_nr_cid so expansion
+	 * of cid allocation will preserve cache locality if the number
+	 * of threads or allowed cpus increase again.
+	 */
+	max_nr_cid = atomic_read(&mm->max_nr_cid);
+	while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
+					   atomic_read(&mm->mm_users))),
+	       max_nr_cid > allowed_max_nr_cid) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
+		if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
+			max_nr_cid = allowed_max_nr_cid;
+			break;
+		}
+	}
 	/* Try to re-use recent cid. This improves cache locality. */
-	if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
+	cid = __this_cpu_read(pcpu_cid->recent_cid);
+	if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
+	    !cpumask_test_and_set_cpu(cid, cidmask))
 		return cid;
 	/*
 	 * Expand cid allocation if the maximum number of concurrency
@@ -3709,8 +3727,9 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 	 * and number of threads. Expanding cid allocation as much as
 	 * possible improves cache locality.
 	 */
-	cid = atomic_read(&mm->max_nr_cid);
+	cid = max_nr_cid;
 	while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
 		if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
 			continue;
 		if (!cpumask_test_and_set_cpu(cid, cidmask))
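
To see the new upper bound in isolation, the clamping loop above can be
modelled in plain userspace C11. This is only an illustrative sketch,
not kernel code: clamp_max_nr_cid() is an invented name, and C11
atomics stand in for the kernel's atomic_t/atomic_try_cmpxchg API.

/*
 * Userspace model of the max_nr_cid clamping in __mm_cid_try_get();
 * illustrative only, names invented for the example.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int max_nr_cid;

/* Clamp max_nr_cid down to min(nr_cpus_allowed, mm_users). */
static int clamp_max_nr_cid(int nr_cpus_allowed, int mm_users)
{
	int allowed = nr_cpus_allowed < mm_users ? nr_cpus_allowed : mm_users;
	int cur = atomic_load(&max_nr_cid);

	while (cur > allowed) {
		/* On failure, cur is reloaded, mirroring atomic_try_cmpxchg. */
		if (atomic_compare_exchange_weak(&max_nr_cid, &cur, allowed))
			return allowed;
	}
	return cur;
}

int main(void)
{
	atomic_store(&max_nr_cid, 8);	/* 8 CIDs handed out while 8 threads ran */
	/* Process shrank to 2 threads on 4 allowed CPUs: bound is min(4, 2). */
	printf("max_nr_cid clamped to %d\n", clamp_max_nr_cid(4, 2));
	return 0;
}

With 8 CIDs previously handed out, shrinking to 2 threads on 4 allowed
CPUs clamps the bound to min(4, 2) = 2, so subsequent allocations stay
compact and a recent_cid of, say, 5 is no longer reused.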
From patchwork Mon Feb 10 07:57:01 2025
X-Patchwork-Submitter: Gabriele Monaco
X-Patchwork-Id: 13967440

From: Gabriele Monaco <gmonaco@redhat.com>
To: Mathieu Desnoyers, Andrew Morton, Ingo Molnar, Peter Zijlstra,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Gabriele Monaco
Subject: [PATCH v5 2/3] sched: Move task_mm_cid_work to mm delayed work
Date: Mon, 10 Feb 2025 08:57:01 +0100
Message-ID: <20250210075703.79125-3-gmonaco@redhat.com>
In-Reply-To: <20250210075703.79125-1-gmonaco@redhat.com>
References: <20250210075703.79125-1-gmonaco@redhat.com>
Currently, the task_mm_cid_work function is called in a task work
triggered by a scheduler tick to frequently compact the mm_cids of
each process. This can delay the execution of the corresponding thread
for the entire duration of the function, negatively affecting the
response in case of real time tasks. In practice, we observe
task_mm_cid_work increasing latency by 30-35us on a 128-core system;
this order of magnitude is meaningful under PREEMPT_RT.

Run the task_mm_cid_work in a new delayed work connected to the
mm_struct rather than in the task context before returning to
userspace. This delayed work is initialised while allocating the mm
and disabled before freeing it; its execution is no longer triggered
by scheduler ticks but runs periodically based on the defined
MM_CID_SCAN_DELAY.

The main advantage of this change is that the function can be
offloaded to a different CPU and even preempted by RT tasks.

Moreover, this new behaviour could be more predictable with periodic
tasks with short runtime, which may rarely run during a scheduler
tick.

Now the work is always scheduled with the same periodicity for each mm
(the periodicity is not guaranteed due to interference from other
tasks, but mm_cid compaction is mostly best effort anyway).

To avoid excessively increased runtime, we quickly return from the
function if there is no work to be done (i.e. no mm_cid is allocated).
This is helpful for tasks that sleep for a long time, but also for
terminated tasks: we are no longer following the process' state, hence
the function continues to run after a process terminates but before
its mm is freed.

Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
Reviewed-by: Mathieu Desnoyers
Signed-off-by: Gabriele Monaco
---
 include/linux/mm_types.h | 16 ++++++----
 include/linux/sched.h    |  1 -
 kernel/sched/core.c      | 66 +++++-----------------------------------
 kernel/sched/sched.h     |  7 -----
 4 files changed, 18 insertions(+), 72 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0234f14f2aa6b..3aeadb519cac5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -861,12 +861,6 @@ struct mm_struct {
 		 * runqueue locks.
 		 */
 		struct mm_cid __percpu *pcpu_cid;
-		/*
-		 * @mm_cid_next_scan: Next mm_cid scan (in jiffies).
-		 *
-		 * When the next mm_cid scan is due (in jiffies).
-		 */
-		unsigned long mm_cid_next_scan;
 		/**
 		 * @nr_cpus_allowed: Number of CPUs allowed for mm.
 		 *
@@ -889,6 +883,7 @@ struct mm_struct {
 		 * mm nr_cpus_allowed updates.
 		 */
 		raw_spinlock_t cpus_allowed_lock;
+		struct delayed_work mm_cid_work;
 #endif
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* size of all page tables */
@@ -1180,11 +1175,16 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
 
 #ifdef CONFIG_SCHED_MM_CID
 
+#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
+#define MM_CID_SCAN_DELAY	100			/* 100ms */
+
 enum mm_cid_state {
 	MM_CID_UNSET = -1U,	/* Unset state has lazy_put flag set. */
 	MM_CID_LAZY_PUT = (1U << 31),
 };
 
+extern void task_mm_cid_work(struct work_struct *work);
+
 static inline bool mm_cid_is_unset(int cid)
 {
 	return cid == MM_CID_UNSET;
@@ -1257,12 +1257,16 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *p)
 	if (!mm->pcpu_cid)
 		return -ENOMEM;
 	mm_init_cid(mm, p);
+	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
+	schedule_delayed_work(&mm->mm_cid_work,
+			      msecs_to_jiffies(MM_CID_SCAN_DELAY));
 	return 0;
 }
 #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 
 static inline void mm_destroy_cid(struct mm_struct *mm)
 {
+	disable_delayed_work_sync(&mm->mm_cid_work);
 	free_percpu(mm->pcpu_cid);
 	mm->pcpu_cid = NULL;
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9632e3318e0d6..515b15f946cac 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1397,7 +1397,6 @@ struct task_struct {
 	int				last_mm_cid;	/* Most recent cid in mm */
 	int				migrate_from_cpu;
 	int				mm_cid_active;	/* Whether cid bitmap is active */
-	struct callback_head		cid_work;
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 165c90ba64ea9..c65003ab8c55b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4524,7 +4524,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->wake_entry.u_flags = CSD_TYPE_TTWU;
 	p->migration_pending = NULL;
 #endif
-	init_sched_mm_cid(p);
 }
 
 DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
@@ -5662,7 +5661,6 @@ void sched_tick(void)
 		resched_latency = cpu_resched_latency(rq);
 	calc_global_load_tick(rq);
 	sched_core_tick(rq);
-	task_tick_mm_cid(rq, donor);
 	scx_tick(rq);
 
 	rq_unlock(rq, &rf);
@@ -10528,38 +10526,17 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
 	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
 }
 
-static void task_mm_cid_work(struct callback_head *work)
+void task_mm_cid_work(struct work_struct *work)
 {
-	unsigned long now = jiffies, old_scan, next_scan;
-	struct task_struct *t = current;
 	struct cpumask *cidmask;
-	struct mm_struct *mm;
+	struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
+	struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
 	int weight, cpu;
 
-	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-
-	work->next = work;	/* Prevent double-add */
-	if (t->flags & PF_EXITING)
-		return;
-	mm = t->mm;
-	if (!mm)
-		return;
-	old_scan = READ_ONCE(mm->mm_cid_next_scan);
-	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-	if (!old_scan) {
-		unsigned long res;
-
-		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
-		if (res != old_scan)
-			old_scan = res;
-		else
-			old_scan = next_scan;
-	}
-	if (time_before(now, old_scan))
-		return;
-	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		return;
 	cidmask = mm_cidmask(mm);
+	/* Nothing to clear for now */
+	if (cpumask_empty(cidmask))
+		goto out;
 	/* Clear cids that were not recently used. */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_old(mm, cpu);
@@ -10570,35 +10547,8 @@ static void task_mm_cid_work(struct callback_head *work)
 	 */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-}
-
-void init_sched_mm_cid(struct task_struct *t)
-{
-	struct mm_struct *mm = t->mm;
-	int mm_users = 0;
-
-	if (mm) {
-		mm_users = atomic_read(&mm->mm_users);
-		if (mm_users == 1)
-			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-	}
-	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-	init_task_work(&t->cid_work, task_mm_cid_work);
-}
-
-void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-{
-	struct callback_head *work = &curr->cid_work;
-	unsigned long now = jiffies;
-
-	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-	    work->next != work)
-		return;
-	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-		return;
-
-	/* No page allocation under rq lock */
-	task_work_add(curr, work, TWA_RESUME);
+out:
+	schedule_delayed_work(delayed_work, msecs_to_jiffies(MM_CID_SCAN_DELAY));
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 606c96b74ebfa..fc613d9090bed 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3622,16 +3622,11 @@ extern void sched_dynamic_update(int mode);
 
 #ifdef CONFIG_SCHED_MM_CID
 
-#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
-#define MM_CID_SCAN_DELAY	100			/* 100ms */
-
 extern raw_spinlock_t cid_lock;
 extern int use_cid_lock;
 
 extern void sched_mm_cid_migrate_from(struct task_struct *t);
 extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-extern void init_sched_mm_cid(struct task_struct *t);
 
 static inline void __mm_cid_put(struct mm_struct *mm, int cid)
 {
@@ -3899,8 +3894,6 @@ static inline void switch_mm_cid(struct rq *rq,
 static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
 static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
 static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-static inline void init_sched_mm_cid(struct task_struct *t) { }
 #endif /* !CONFIG_SCHED_MM_CID */
 
 extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
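
The patch builds on the standard self-rearming delayed_work pattern:
arm the work at init, re-arm it from inside the handler, and disable
it synchronously at teardown. Below is a minimal self-contained module
sketch of that pattern under assumed names (demo_work, demo_fn,
DEMO_DELAY_MS are invented for illustration); the workqueue calls are
the same ones the patch uses on mm_struct.

/*
 * Sketch of the self-rearming delayed_work pattern used by the patch;
 * names are invented for the example.
 */
#include <linux/module.h>
#include <linux/workqueue.h>

#define DEMO_DELAY_MS 100	/* mirrors MM_CID_SCAN_DELAY */

static struct delayed_work demo_work;

static void demo_fn(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);

	pr_info("periodic scan ran\n");
	/* Re-arm from within the handler, as task_mm_cid_work() now does. */
	schedule_delayed_work(dwork, msecs_to_jiffies(DEMO_DELAY_MS));
}

static int __init demo_init(void)
{
	/* Mirrors mm_alloc_cid_noprof(): init, then arm the first run. */
	INIT_DELAYED_WORK(&demo_work, demo_fn);
	schedule_delayed_work(&demo_work, msecs_to_jiffies(DEMO_DELAY_MS));
	return 0;
}

static void __exit demo_exit(void)
{
	/* Mirrors mm_destroy_cid(): stop re-arming and wait for completion. */
	disable_delayed_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

disable_delayed_work_sync() both waits for an in-flight execution and
marks the work disabled so the handler's re-arm attempt becomes a
no-op, which is why the patch can call it in mm_destroy_cid() right
before the mm is freed without racing against a final re-schedule.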