From patchwork Wed Feb 26 12:01:16 2025
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13992175
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, vernhao@tencent.com,
    mgorman@techsingularity.net, hughd@google.com, willy@infradead.org,
    david@redhat.com, peterz@infradead.org, luto@kernel.org,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, rjgolo@gmail.com
Subject: [RFC PATCH v12 based on mm-unstable as of Feb 21, 2025 09/25] mm:
    introduce API to perform tlb shootdown on exit from page allocator
Date: Wed, 26 Feb 2025 21:01:16 +0900
Message-Id: <20250226120132.28469-9-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20250226120132.28469-1-byungchul@sk.com>
References: <20250226113342.GB1935@system.software.com>
    <20250226120132.28469-1-byungchul@sk.com>
Functionally, no change. This is a preparation for the luf mechanism,
which performs the tlb shootdown required on exit from the page
allocator. This patch introduces a new API, rather than reusing the
existing try_to_unmap_flush(), to avoid repeated and redundant tlb
shootdowns caused by frequent page allocations during a session of
batched unmap flush.
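
For illustration only, here is a minimal sketch of how the new API is
meant to be driven on exit from the page allocator. The luf_takeoff()
helper and the 'pending' luf_batch argument below are assumptions made
for this sketch; the actual call sites are wired up by later patches in
this series:

	/*
	 * Hypothetical allocator exit path (not part of this patch).
	 * Locking around the pending batch is elided for brevity.
	 */
	static void luf_takeoff(struct luf_batch *pending)
	{
		/*
		 * Gather only the shootdown this allocation depends on
		 * into the dedicated per-task batch, leaving tlb_ubc
		 * untouched (reset == false)...
		 */
		fold_batch(&current->tlb_ubc_takeoff, &pending->batch, false);

		/* ...and flush just that, not the whole tlb_ubc. */
		try_to_unmap_flush_takeoff();
	}

Reusing try_to_unmap_flush() here instead would flush everything
accumulated in tlb_ubc on every allocation, which is exactly the
repeated, redundant shootdown this new API avoids.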
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 20 ++++++++++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9632e3318e0d6..86ef426644639 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1401,6 +1401,7 @@ struct task_struct {
 #endif
 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_takeoff;
 
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/internal.h b/mm/internal.h
index 8ade04255dba3..8ad7e86c1c0e2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1269,6 +1269,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
+void try_to_unmap_flush_takeoff(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
 void fold_batch(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src, bool reset);
 void fold_luf_batch(struct luf_batch *dst, struct luf_batch *src);
@@ -1279,6 +1280,9 @@ static inline void try_to_unmap_flush(void)
 static inline void try_to_unmap_flush_dirty(void)
 {
 }
+static inline void try_to_unmap_flush_takeoff(void)
+{
+}
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index ac450a45257f6..61366b4570c9a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -772,6 +772,26 @@ void fold_luf_batch(struct luf_batch *dst, struct luf_batch *src)
 	read_unlock_irqrestore(&src->lock, flags);
 }
 
+void try_to_unmap_flush_takeoff(void)
+{
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_takeoff = &current->tlb_ubc_takeoff;
+
+	if (!tlb_ubc_takeoff->flush_required)
+		return;
+
+	arch_tlbbatch_flush(&tlb_ubc_takeoff->arch);
+
+	/*
+	 * Now that the tlb shootdown of tlb_ubc_takeoff has been
+	 * performed, it's a good chance to shrink tlb_ubc if possible.
+	 */
+	if (arch_tlbbatch_done(&tlb_ubc->arch, &tlb_ubc_takeoff->arch))
+		reset_batch(tlb_ubc);
+
+	reset_batch(tlb_ubc_takeoff);
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs. It is
  * important if a PTE was dirty when it was unmapped that it's flushed
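
A note on the shrink step above: if the CPUs just flushed for
tlb_ubc_takeoff cover every CPU that tlb_ubc still has pending, the
pending batch carries nothing new and can be dropped. Below is a hedged
sketch of what arch_tlbbatch_done() could look like on x86, where the
arch batch is a cpumask; this is an assumption for illustration only,
as the real helper is introduced elsewhere in this series:

	/*
	 * Hypothetical x86 sketch: report whether the flush already
	 * performed for @bdone covered everything pending in @batch,
	 * in which case the caller may reset_batch() it.
	 */
	static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *batch,
					      struct arch_tlbflush_unmap_batch *bdone)
	{
		return cpumask_subset(&batch->cpumask, &bdone->cpumask);
	}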