From patchwork Mon Jan 22 01:00:35 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13524732
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
 namit@vmware.com, vernhao@tencent.com, mgorman@techsingularity.net,
 hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org,
 luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v7 3/8] mm/rmap: Recognize read-only TLB entries during batched TLB flush
Date: Mon, 22 Jan 2024 10:00:35 +0900
Message-Id: <20240122010040.74346-4-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240122010040.74346-1-byungchul@sk.com>
References: <20240122010040.74346-1-byungchul@sk.com>
Functionally, no change. This is a preparation for the migrc mechanism,
which needs to recognize read-only TLB entries and make use of them to
batch TLB flushes more aggressively. In addition, the newly introduced
API, fold_ubc(), will be used by the migrc mechanism when manipulating
TLB batch data.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 31 ++++++++++++++++++++++++++++++-
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 292c31697248..0317e7a65151 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1328,6 +1328,7 @@ struct task_struct {
 #endif
 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_ro;
 
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/internal.h b/mm/internal.h
index b61034bd50f5..b880f1e78700 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -923,6 +923,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
 #else
 static inline void try_to_unmap_flush(void)
 {
@@ -933,6 +934,9 @@ static inline void try_to_unmap_flush_dirty(void)
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
 #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
 
 extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index 7a27a2b41802..da36f23ff7b0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -605,6 +605,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 }
 
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+	      struct tlbflush_unmap_batch *src)
+{
+	if (!src->flush_required)
+		return;
+
+	/*
+	 * Fold src to dst.
+	 */
+	arch_tlbbatch_fold(&dst->arch, &src->arch);
+	dst->writable = dst->writable || src->writable;
+	dst->flush_required = true;
+
+	/*
+	 * Reset src.
+	 */
+	arch_tlbbatch_clear(&src->arch);
+	src->flush_required = false;
+	src->writable = false;
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs. It is
  * important if a PTE was dirty when it was unmapped that it's flushed
@@ -614,7 +636,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 void try_to_unmap_flush(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 
+	fold_ubc(tlb_ubc, tlb_ubc_ro);
 	if (!tlb_ubc->flush_required)
 		return;
 
@@ -645,13 +669,18 @@ void try_to_unmap_flush_dirty(void)
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 				      unsigned long uaddr)
 {
-	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc;
 	int batch;
 	bool writable = pte_dirty(pteval);
 
 	if (!pte_accessible(mm, pteval))
 		return;
 
+	if (pte_write(pteval) || writable)
+		tlb_ubc = &current->tlb_ubc;
+	else
+		tlb_ubc = &current->tlb_ubc_ro;
+
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
 	tlb_ubc->flush_required = true;
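As context for reviewers, the fold-and-reset behavior of the new fold_ubc()
and the read-only vs. writable routing in set_tlb_ubc_flush_pending() can be
sketched as a small user-space model. Everything below is illustrative, not
kernel code: struct ubc_model, the uint64_t CPU bitmask standing in for the
arch-specific batch, and the helper names are all assumptions of this sketch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for struct tlbflush_unmap_batch: the arch-specific
 * batch (a cpumask on x86) is reduced to a plain 64-bit CPU bitmask. */
struct ubc_model {
	uint64_t cpu_mask;	/* CPUs that may cache stale TLB entries */
	bool flush_required;
	bool writable;
};

/* Models fold_ubc(): merge src into dst, then reset src so the read-only
 * batch starts empty for the next round of unmapping. */
static void fold_ubc_model(struct ubc_model *dst, struct ubc_model *src)
{
	if (!src->flush_required)
		return;

	dst->cpu_mask |= src->cpu_mask;		/* like arch_tlbbatch_fold() */
	dst->writable = dst->writable || src->writable;
	dst->flush_required = true;

	src->cpu_mask = 0;			/* like arch_tlbbatch_clear() */
	src->flush_required = false;
	src->writable = false;
}

/* Models the routing in set_tlb_ubc_flush_pending(): writable or dirty
 * PTEs go to the ordinary batch, everything else to the read-only one. */
static struct ubc_model *pick_batch_model(bool pte_write, bool pte_dirty,
					  struct ubc_model *ubc,
					  struct ubc_model *ubc_ro)
{
	return (pte_write || pte_dirty) ? ubc : ubc_ro;
}
```

In this model, try_to_unmap_flush() only has to fold the read-only batch into
the main one before testing flush_required, so no pending flush is lost even
though read-only entries are accumulated separately.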