From patchwork Thu Feb 20 05:20:10 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13983327
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [RFC PATCH v12 09/26] mm: introduce API to perform tlb shootdown on
 exit from page allocator
Date: Thu, 20 Feb 2025 14:20:10 +0900
Message-Id: <20250220052027.58847-10-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20250220052027.58847-1-byungchul@sk.com>
References: <20250220052027.58847-1-byungchul@sk.com>
Functionally, no change.

This is a preparation for the luf mechanism, which performs the tlb
shootdown required on exit from the page allocator.

This patch introduces a new API rather than reusing the existing
try_to_unmap_flush(), so as to avoid repeated and redundant tlb
shootdowns caused by frequent page allocations during a session of
batched unmap flush.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 20 ++++++++++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index bb343136ddd05..8e6e7a83332cf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1375,6 +1375,7 @@ struct task_struct {
 #endif

 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_takeoff;

 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/internal.h b/mm/internal.h
index b38a9ae9d6993..cbdebf8a02437 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1239,6 +1239,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
+void try_to_unmap_flush_takeoff(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
 void fold_batch(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src, bool reset);
 void fold_luf_batch(struct luf_batch *dst, struct luf_batch *src);
@@ -1249,6 +1250,9 @@ static inline void try_to_unmap_flush(void)
 static inline void try_to_unmap_flush_dirty(void)
 {
 }
+static inline void try_to_unmap_flush_takeoff(void)
+{
+}
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 74fbf6c2fb3a7..72c5e665e59a4 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -772,6 +772,26 @@ void fold_luf_batch(struct luf_batch *dst, struct luf_batch *src)
 	read_unlock_irqrestore(&src->lock, flags);
 }

+void try_to_unmap_flush_takeoff(void)
+{
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_takeoff = &current->tlb_ubc_takeoff;
+
+	if (!tlb_ubc_takeoff->flush_required)
+		return;
+
+	arch_tlbbatch_flush(&tlb_ubc_takeoff->arch);
+
+	/*
+	 * Now that the tlb shootdown of tlb_ubc_takeoff has been
+	 * performed, it's a good chance to shrink tlb_ubc if possible.
+	 */
+	if (arch_tlbbatch_done(&tlb_ubc->arch, &tlb_ubc_takeoff->arch))
+		reset_batch(tlb_ubc);
+
+	reset_batch(tlb_ubc_takeoff);
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs.  It is
  * important if a PTE was dirty when it was unmapped that it's flushed