From patchwork Fri May 31 09:19:57 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13681405
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v11 08/12] mm/rmap: recognize read-only tlb entries during
 batched tlb flush
Date: Fri, 31 May 2024 18:19:57 +0900
Message-Id: <20240531092001.30428-9-byungchul@sk.com>
In-Reply-To: <20240531092001.30428-1-byungchul@sk.com>
References: <20240531092001.30428-1-byungchul@sk.com>
Functionally, no change. This is a preparation for the luf mechanism,
which requires recognizing read-only TLB entries and handling them
differently. The API newly introduced in this patch, fold_ubc(), will be
used by the luf mechanism.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 34 ++++++++++++++++++++++++++++++++--
 3 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ab5a2ed79b88..d9722c014157 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1340,6 +1340,7 @@ struct task_struct {
 #endif

 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_ro;
 	unsigned short int		ugen;

 	/* Cache last used pipe for splice(): */
diff --git a/mm/internal.h b/mm/internal.h
index dba6d0eb7b6d..ca6fb5b2a640 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1124,6 +1124,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
 #else
 static inline void try_to_unmap_flush(void)
 {
@@ -1134,6 +1135,9 @@ static inline void try_to_unmap_flush_dirty(void)
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
 #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */

 extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index a65a94aada8d..1a246788e867 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -634,6 +634,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 }

 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+	      struct tlbflush_unmap_batch *src)
+{
+	if (!src->flush_required)
+		return;
+
+	/*
+	 * Fold src to dst.
+	 */
+	arch_tlbbatch_fold(&dst->arch, &src->arch);
+	dst->writable = dst->writable || src->writable;
+	dst->flush_required = true;
+
+	/*
+	 * Reset src.
+	 */
+	arch_tlbbatch_clear(&src->arch);
+	src->flush_required = false;
+	src->writable = false;
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs. It is
  * important if a PTE was dirty when it was unmapped that it's flushed
@@ -643,7 +665,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 void try_to_unmap_flush(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;

+	fold_ubc(tlb_ubc, tlb_ubc_ro);
 	if (!tlb_ubc->flush_required)
 		return;

@@ -657,8 +681,9 @@ void try_to_unmap_flush(void)
 void try_to_unmap_flush_dirty(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;

-	if (tlb_ubc->writable)
+	if (tlb_ubc->writable || tlb_ubc_ro->writable)
 		try_to_unmap_flush();
 }

@@ -675,13 +700,18 @@ void try_to_unmap_flush_dirty(void)
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 				      unsigned long uaddr)
 {
-	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc;
 	int batch;
 	bool writable = pte_dirty(pteval);

 	if (!pte_accessible(mm, pteval))
 		return;

+	if (pte_write(pteval))
+		tlb_ubc = &current->tlb_ubc;
+	else
+		tlb_ubc = &current->tlb_ubc_ro;
+
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
 	tlb_ubc->flush_required = true;