From patchwork Wed Feb 26 12:03:20 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13992202
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, vernhao@tencent.com,
    mgorman@techsingularity.net, hughd@google.com, willy@infradead.org,
    david@redhat.com, peterz@infradead.org, luto@kernel.org,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, rjgolo@gmail.com
Subject: [RFC PATCH v12 based on v6.14-rc4 09/25] mm: introduce API to perform tlb shootdown on exit from page allocator
Date: Wed, 26 Feb 2025 21:03:20 +0900
Message-Id: <20250226120336.29565-9-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20250226120336.29565-1-byungchul@sk.com>
References: <20250226113024.GA1935@system.software.com>
 <20250226120336.29565-1-byungchul@sk.com>
Functionally, no change.

This is a preparation for the luf mechanism, which performs the tlb
shootdown required on exit from the page allocator.

This patch introduces a new API rather than reusing the existing
try_to_unmap_flush(), to avoid repeated and redundant tlb shootdowns
caused by frequent page allocations during a session of batched unmap
flush.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 20 ++++++++++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9632e3318e0d6..86ef426644639 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1401,6 +1401,7 @@ struct task_struct {
 #endif

 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_takeoff;

 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/internal.h b/mm/internal.h
index 3333d8d461c2c..b52e14f86c436 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1252,6 +1252,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
+void try_to_unmap_flush_takeoff(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
 void fold_batch(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src, bool reset);
 void fold_luf_batch(struct luf_batch *dst, struct luf_batch *src);
@@ -1262,6 +1263,9 @@ static inline void try_to_unmap_flush(void)
 static inline void try_to_unmap_flush_dirty(void)
 {
 }
+static inline void try_to_unmap_flush_takeoff(void)
+{
+}
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 74fbf6c2fb3a7..72c5e665e59a4 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -772,6 +772,26 @@ void fold_luf_batch(struct luf_batch *dst, struct luf_batch *src)
 	read_unlock_irqrestore(&src->lock, flags);
 }

+void try_to_unmap_flush_takeoff(void)
+{
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_takeoff = &current->tlb_ubc_takeoff;
+
+	if (!tlb_ubc_takeoff->flush_required)
+		return;
+
+	arch_tlbbatch_flush(&tlb_ubc_takeoff->arch);
+
+	/*
+	 * Now that tlb shootdown of tlb_ubc_takeoff has been performed,
+	 * it's a good chance to shrink tlb_ubc if possible.
+	 */
+	if (arch_tlbbatch_done(&tlb_ubc->arch, &tlb_ubc_takeoff->arch))
+		reset_batch(tlb_ubc);
+
+	reset_batch(tlb_ubc_takeoff);
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs.  It is
  * important if a PTE was dirty when it was unmapped that it's flushed