From patchwork Wed Feb 26 12:03:25 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13992205
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, vernhao@tencent.com,
    mgorman@techsingularity.net, hughd@google.com, willy@infradead.org,
    david@redhat.com, peterz@infradead.org, luto@kernel.org,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, rjgolo@gmail.com
Subject: [RFC PATCH v12 based on v6.14-rc4 14/25] mm/rmap: recognize
 read-only tlb entries during batched tlb flush
Date: Wed, 26 Feb 2025 21:03:25 +0900
Message-Id: <20250226120336.29565-14-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20250226120336.29565-1-byungchul@sk.com>
References: <20250226113024.GA1935@system.software.com>
 <20250226120336.29565-1-byungchul@sk.com>

Functionally, no change.

This is a preparation for the luf mechanism, which requires recognizing
read-only tlb entries and handling them differently.  To that end,
gather read-only entries into a separate per-task batch, tlb_ubc_ro,
and fold it back into tlb_ubc at flush time via fold_batch().  The
newly introduced batch will be used by the luf mechanism.
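
For reviewers, here is a condensed, self-contained model of the
two-batch scheme.  It is illustrative only, not kernel code: 'struct
batch' and the helpers below are simplified stand-ins for struct
tlbflush_unmap_batch, set_tlb_ubc_flush_pending(), fold_batch() and
try_to_unmap_flush().

/*
 * Illustrative model of the two-batch scheme.  NOT kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct batch {
	bool flush_required;
	bool writable;
	int nr_pending;		/* stand-in for the arch-specific state */
};

static struct batch tlb_ubc;	/* entries mapped with write permission */
static struct batch tlb_ubc_ro;	/* read-only entries, kept separately   */

/* Roughly what fold_batch() does: merge src into dst, then reset src. */
static void fold(struct batch *dst, struct batch *src)
{
	if (!src->flush_required)
		return;
	dst->flush_required = true;
	dst->writable |= src->writable;
	dst->nr_pending += src->nr_pending;
	*src = (struct batch){ 0 };
}

/* Roughly what set_tlb_ubc_flush_pending() does: route by pte_write(). */
static void add_pending(bool writable_pte, bool dirty_pte)
{
	struct batch *b = writable_pte ? &tlb_ubc : &tlb_ubc_ro;

	b->nr_pending++;
	b->flush_required = true;
	if (dirty_pte)
		b->writable = true;
}

/* Roughly what try_to_unmap_flush() does: fold ro entries back in first. */
static void flush(void)
{
	fold(&tlb_ubc, &tlb_ubc_ro);
	if (!tlb_ubc.flush_required)
		return;
	printf("flushing %d entries, writable=%d\n",
	       tlb_ubc.nr_pending, tlb_ubc.writable);
	tlb_ubc = (struct batch){ 0 };
}

int main(void)
{
	add_pending(true, true);	/* writable, dirty pte  */
	add_pending(false, false);	/* read-only, clean pte */
	flush();			/* covers both batches  */
	return 0;
}

The point of the separation is that luf can later treat the read-only
batch differently, while the existing flush paths keep their current
semantics: try_to_unmap_flush() folds tlb_ubc_ro back into tlb_ubc
before flushing, and try_to_unmap_flush_dirty() checks the writable
flag of both batches.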
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/rmap.c             | 16 ++++++++++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index a3049ea5b3ad3..d1a3c97491ff2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1407,6 +1407,7 @@ struct task_struct {
 
 	struct tlbflush_unmap_batch	tlb_ubc;
 	struct tlbflush_unmap_batch	tlb_ubc_takeoff;
+	struct tlbflush_unmap_batch	tlb_ubc_ro;
 
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/rmap.c b/mm/rmap.c
index 1581b1a00f974..3ed6234dd777e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -775,6 +775,7 @@ void fold_luf_batch(struct luf_batch *dst, struct luf_batch *src)
 void try_to_unmap_flush_takeoff(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 	struct tlbflush_unmap_batch *tlb_ubc_takeoff = &current->tlb_ubc_takeoff;
 
 	if (!tlb_ubc_takeoff->flush_required)
@@ -789,6 +790,9 @@ void try_to_unmap_flush_takeoff(void)
 	if (arch_tlbbatch_done(&tlb_ubc->arch, &tlb_ubc_takeoff->arch))
 		reset_batch(tlb_ubc);
 
+	if (arch_tlbbatch_done(&tlb_ubc_ro->arch, &tlb_ubc_takeoff->arch))
+		reset_batch(tlb_ubc_ro);
+
 	reset_batch(tlb_ubc_takeoff);
 }
 
@@ -801,7 +805,9 @@ void try_to_unmap_flush_takeoff(void)
 void try_to_unmap_flush(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 
+	fold_batch(tlb_ubc, tlb_ubc_ro, true);
 	if (!tlb_ubc->flush_required)
 		return;
 
@@ -813,8 +819,9 @@ void try_to_unmap_flush(void)
 void try_to_unmap_flush_dirty(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 
-	if (tlb_ubc->writable)
+	if (tlb_ubc->writable || tlb_ubc_ro->writable)
 		try_to_unmap_flush();
 }
 
@@ -831,13 +838,18 @@ void try_to_unmap_flush_dirty(void)
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 				      unsigned long uaddr)
 {
-	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc;
 	int batch;
 	bool writable = pte_dirty(pteval);
 
 	if (!pte_accessible(mm, pteval))
 		return;
 
+	if (pte_write(pteval))
+		tlb_ubc = &current->tlb_ubc;
+	else
+		tlb_ubc = &current->tlb_ubc_ro;
+
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
 	tlb_ubc->flush_required = true;