From patchwork Fri May 10 06:51:55 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13660933
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v10 01/12] x86/tlb: add APIs manipulating tlb batch's arch data
Date: Fri, 10 May 2024 15:51:55 +0900
Message-Id: <20240510065206.76078-2-byungchul@sk.com>
In-Reply-To: <20240510065206.76078-1-byungchul@sk.com>
References: <20240510065206.76078-1-byungchul@sk.com>
A new mechanism, LUF (Lazy Unmap Flush), defers tlb flush until folios that
have been unmapped and freed eventually get allocated again.  It's safe for
folios that had been mapped read-only and were unmapped, since the contents
of the folios cannot change while they stay in pcp or buddy, so we can still
read the data correctly through the stale tlb entries.
This is a preparation for the mechanism, which needs to recognize read-only
tlb entries by splitting the tlb batch arch data in two, one for read-only
entries and the other for writable ones, and merging the two when needed.
The mechanism also optimizes tlb shootdown by skipping CPUs that have
already performed the tlb flush needed since then.  To support this, add
APIs manipulating the arch data for x86.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/x86/include/asm/tlbflush.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 25726893c6f4..a14f77c5cdde 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -293,6 +294,23 @@ static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)

 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);

+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+	cpumask_clear(&batch->cpumask);
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+				      struct arch_tlbflush_unmap_batch *bsrc)
+{
+	cpumask_or(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+				      struct arch_tlbflush_unmap_batch *bsrc)
+{
+	return !cpumask_andnot(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
 static inline bool pte_flags_need_flush(unsigned long oldflags,
 					unsigned long newflags,
 					bool ignore_access)
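[Illustration only, not part of this patch.]  A rough sketch of how the three
helpers above are expected to be used together by generic code: a caller that
keeps a second pending batch folds it into the main one with
arch_tlbbatch_fold(), resets it with arch_tlbbatch_clear(), and checks whether
a pending set is already covered by a set of flushed CPUs with
arch_tlbbatch_done().  The struct tlbflush_unmap_batch fields used below
(arch, flush_required) already exist in mm; the helper names are made up for
the sketch.

/* Sketch only: fold a secondary pending batch into the main one. */
static void fold_pending_batch(struct tlbflush_unmap_batch *dst,
			       struct tlbflush_unmap_batch *src)
{
	if (!src->flush_required)
		return;

	/* Accumulate the CPUs recorded in src into dst. */
	arch_tlbbatch_fold(&dst->arch, &src->arch);
	dst->flush_required = true;

	/* src has been merged away; start it from a clean state. */
	arch_tlbbatch_clear(&src->arch);
	src->flush_required = false;
}

/*
 * Sketch only: prune the CPUs already covered by 'done' from 'pending'
 * and report whether anything is left to flush.
 */
static bool pending_already_flushed(struct tlbflush_unmap_batch *pending,
				    struct tlbflush_unmap_batch *done)
{
	return arch_tlbbatch_done(&pending->arch, &done->arch);
}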
From patchwork Fri May 10 06:51:56 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13660934
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v10 02/12] arm64: tlbflush: add APIs manipulating tlb batch's arch data
Date: Fri, 10 May 2024 15:51:56 +0900
Message-Id: <20240510065206.76078-3-byungchul@sk.com>
In-Reply-To: <20240510065206.76078-1-byungchul@sk.com>
References: <20240510065206.76078-1-byungchul@sk.com>
A new mechanism, LUF (Lazy Unmap Flush), defers tlb flush until folios that
have been unmapped and freed eventually get allocated again.  It's safe for
folios that had been mapped read-only and were unmapped, since the contents
of the folios cannot change while they stay in pcp or buddy, so we can still
read the data correctly through the stale tlb entries.

This is a preparation for the mechanism, which requires manipulating the
tlb batch's arch data.  Even though arm64 does nothing with the arch data
for tlb batching, any arch selecting CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
should still provide these APIs.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/arm64/include/asm/tlbflush.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a75de2665d84..b8c7fbc1c68e 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -347,6 +347,24 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	dsb(ish);
 }

+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+	/* nothing to do */
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+				      struct arch_tlbflush_unmap_batch *bsrc)
+{
+	/* nothing to do */
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+				      struct arch_tlbflush_unmap_batch *bsrc)
+{
+	/* The kernel can consider the tlb batch as always done. */
+	return true;
+}
+
 /*
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
From patchwork Fri May 10 06:51:57 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13660935
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v10 03/12] riscv, tlb: add APIs manipulating tlb batch's arch data
Date: Fri, 10 May 2024 15:51:57 +0900
Message-Id: <20240510065206.76078-4-byungchul@sk.com>
In-Reply-To: <20240510065206.76078-1-byungchul@sk.com>
References: <20240510065206.76078-1-byungchul@sk.com>
A new mechanism, LUF (Lazy Unmap Flush), defers tlb flush until folios that
have been unmapped and freed eventually get allocated again.  It's safe for
folios that had been mapped read-only and were unmapped, since the contents
of the folios cannot change while they stay in pcp or buddy, so we can still
read the data correctly through the stale tlb entries.
This is a preparation for the mechanism, which needs to recognize read-only
tlb entries by splitting the tlb batch arch data in two, one for read-only
entries and the other for writable ones, and merging the two when needed.
The mechanism also optimizes tlb shootdown by skipping CPUs that have
already performed the tlb flush needed since then.  To support this, add
APIs manipulating the arch data for riscv.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/riscv/include/asm/tlbflush.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 4112cc8d1d69..480c082ccde3 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -8,6 +8,7 @@
 #define _ASM_RISCV_TLBFLUSH_H

 #include
+#include
 #include
 #include
@@ -55,6 +56,26 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 void arch_flush_tlb_batched_pending(struct mm_struct *mm);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);

+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+	cpumask_clear(&batch->cpumask);
+
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+				      struct arch_tlbflush_unmap_batch *bsrc)
+{
+	cpumask_or(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+				      struct arch_tlbflush_unmap_batch *bsrc)
+{
+	return !cpumask_andnot(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+
+}
+
 #else /* CONFIG_SMP && CONFIG_MMU */

 #define flush_tlb_all() local_flush_tlb_all()
From patchwork Fri May 10 06:51:58 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13660936
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v10 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush()
Date: Fri, 10 May 2024 15:51:58 +0900
Message-Id: <20240510065206.76078-5-byungchul@sk.com>
In-Reply-To: <20240510065206.76078-1-byungchul@sk.com>
References: <20240510065206.76078-1-byungchul@sk.com>
A new mechanism, LUF (Lazy Unmap Flush), defers tlb flush until folios that
have been unmapped and freed eventually get allocated again.  It's safe for
folios that had been mapped read-only and were unmapped, since the contents
of the folios cannot change while they stay in pcp or buddy, so we can still
read the data correctly through the stale tlb entries.

This is a preparation for the mechanism, which requires avoiding redundant
tlb flushes by manipulating the tlb batch's arch data.  To achieve that,
separate the part that clears the tlb batch's arch data out of
arch_tlbbatch_flush().
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/riscv/mm/tlbflush.c | 1 -
 arch/x86/mm/tlb.c        | 2 --
 mm/rmap.c                | 1 +
 3 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 07d743f87b3f..9cbd27148357 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -234,5 +234,4 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	__flush_tlb_range(&batch->cpumask, FLUSH_TLB_NO_ASID, 0,
 			  FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
-	cpumask_clear(&batch->cpumask);
 }
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 44ac64f3a047..24bce69222cd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1265,8 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}

-	cpumask_clear(&batch->cpumask);
-
 	put_flush_tlb_info();
 	put_cpu();
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 2608c40dffad..cf8a99a49aef 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -649,6 +649,7 @@ void try_to_unmap_flush(void)
 		return;

 	arch_tlbbatch_flush(&tlb_ubc->arch);
+	arch_tlbbatch_clear(&tlb_ubc->arch);
 	tlb_ubc->flush_required = false;
 	tlb_ubc->writable = false;
 }
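[Illustration only, not part of the series.]  A hedged sketch of why the
clear is split out: once arch_tlbbatch_flush() no longer clears the cpumask,
a caller can record which CPUs have just been flushed before resetting the
batch, which is the kind of bookkeeping the later patches use to skip
redundant shootdowns.  The done_batch parameter below is a hypothetical
per-context record, assumed for the sketch.

/*
 * Sketch only: flush, remember the CPUs that were covered, then clear
 * the batch explicitly, as mm/rmap.c now does above.
 */
static void flush_and_record(struct tlbflush_unmap_batch *tlb_ubc,
			     struct arch_tlbflush_unmap_batch *done_batch)
{
	if (!tlb_ubc->flush_required)
		return;

	arch_tlbbatch_flush(&tlb_ubc->arch);	/* no longer clears the cpumask */

	/* Remember which CPUs have just been flushed. */
	arch_tlbbatch_fold(done_batch, &tlb_ubc->arch);

	arch_tlbbatch_clear(&tlb_ubc->arch);
	tlb_ubc->flush_required = false;
	tlb_ubc->writable = false;
}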
From patchwork Fri May 10 06:51:59 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13660937
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v10 05/12] mm: buddy: make room for a new variable, ugen, in struct page
Date: Fri, 10 May 2024 15:51:59 +0900
Message-Id: <20240510065206.76078-6-byungchul@sk.com>
In-Reply-To: <20240510065206.76078-1-byungchul@sk.com>
References: <20240510065206.76078-1-byungchul@sk.com>
Functionally, no change.

This is a preparation for the luf mechanism, which tracks the need of a tlb
flush for each page residing in buddy, using a generation number in struct
page.

Fortunately, the private field in struct page is used in buddy only to store
the page order, which ranges from 0 to MAX_PAGE_ORDER and therefore fits in
an unsigned short int.  So split it into two smaller fields, order and ugen,
so that both can be used in buddy at the same time.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/mm_types.h | 40 +++++++++++++++++++++++++++++++++-------
 mm/internal.h            |  4 ++--
 mm/page_alloc.c          | 13 ++++++++-----
 3 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index db0adf5721cc..cd4ec0d10ffb 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -108,13 +108,25 @@ struct page {
 					pgoff_t index;		/* Our offset within mapping. */
 				unsigned long share;	/* share count for fsdax */
 			};
-			/**
-			 * @private: Mapping-private opaque data.
-			 * Usually used for buffer_heads if PagePrivate.
-			 * Used for swp_entry_t if PageSwapCache.
-			 * Indicates order in the buddy system if PageBuddy.
-			 */
-			unsigned long private;
+			union {
+				/**
+				 * @private: Mapping-private opaque data.
+				 * Usually used for buffer_heads if PagePrivate.
+				 * Used for swp_entry_t if PageSwapCache.
+				 */
+				unsigned long private;
+				struct {
+					/*
+					 * Indicates order in the buddy system if PageBuddy.
+					 */
+					unsigned short int order;
+					/*
+					 * Tracks need of tlb flush used by luf,
+					 * which stands for lazy unmap flush.
+					 */
+					unsigned short int ugen;
+				};
+			};
 		};
 		struct {	/* page_pool used by netstack */
 			/**
@@ -521,6 +533,20 @@ static inline void set_page_private(struct page *page, unsigned long private)
 	page->private = private;
 }

+#define page_buddy_order(page)		((page)->order)
+
+static inline void set_page_buddy_order(struct page *page, unsigned int order)
+{
+	page->order = (unsigned short int)order;
+}
+
+#define page_buddy_ugen(page)		((page)->ugen)
+
+static inline void set_page_buddy_ugen(struct page *page, unsigned short int ugen)
+{
+	page->ugen = ugen;
+}
+
 static inline void *folio_get_private(struct folio *folio)
 {
 	return folio->private;
diff --git a/mm/internal.h b/mm/internal.h
index c6483f73ec13..eb9c7d8650fc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -453,7 +453,7 @@ struct alloc_context {
 static inline unsigned int buddy_order(struct page *page)
 {
 	/* PageBuddy() must be checked by the caller */
-	return page_private(page);
+	return page_buddy_order(page);
 }

 /*
@@ -467,7 +467,7 @@ static inline unsigned int buddy_order(struct page *page)
 * times, potentially observing different values in the tests and the actual
 * use of the result.
 */
-#define buddy_order_unsafe(page)	READ_ONCE(page_private(page))
+#define buddy_order_unsafe(page)	READ_ONCE(page_buddy_order(page))

 /*
  * This function checks whether a page is free && is the buddy
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 33d4a1be927b..917b22b429d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -565,9 +565,12 @@ void prep_compound_page(struct page *page, unsigned int order)
 	prep_compound_head(page, order);
 }

-static inline void set_buddy_order(struct page *page, unsigned int order)
+static inline void set_buddy_order_ugen(struct page *page,
+					unsigned int order,
+					unsigned short int ugen)
 {
-	set_page_private(page, order);
+	set_page_buddy_order(page, order);
+	set_page_buddy_ugen(page, ugen);
 	__SetPageBuddy(page);
 }

@@ -834,7 +837,7 @@ static inline void __free_one_page(struct page *page,
 	}

 done_merging:
-	set_buddy_order(page, order);
+	set_buddy_order_ugen(page, order, 0);

 	if (fpi_flags & FPI_TO_TAIL)
 		to_tail = true;
@@ -1344,7 +1347,7 @@ static inline void expand(struct zone *zone, struct page *page,
 			continue;

 		__add_to_free_list(&page[size], zone, high, migratetype, false);
-		set_buddy_order(&page[size], high);
+		set_buddy_order_ugen(&page[size], high, 0);
 		nr_added += size;
 	}
 	account_freepages(zone, nr_added, migratetype);
@@ -6802,7 +6805,7 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
 			continue;

 		add_to_free_list(current_buddy, zone, high, migratetype, false);
-		set_buddy_order(current_buddy, high);
+		set_buddy_order_ugen(current_buddy, high, 0);
 	}
 }
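[Illustration only, not part of the patch.]  A quick sanity sketch of the two
assumptions the message above relies on: the buddy order always fits in an
unsigned short, and the order/ugen pair does not outgrow the old private
word.  Expressed as hypothetical build-time checks:

#include <linux/build_bug.h>
#include <linux/compiler.h>
#include <linux/limits.h>
#include <linux/mmzone.h>	/* MAX_PAGE_ORDER */

static void __maybe_unused luf_layout_checks(void)
{
	/* The buddy order, 0..MAX_PAGE_ORDER, must fit in 'order'. */
	BUILD_BUG_ON(MAX_PAGE_ORDER > USHRT_MAX);

	/* 'order' plus 'ugen' must not outgrow the old 'private' word. */
	BUILD_BUG_ON(2 * sizeof(unsigned short int) > sizeof(unsigned long));
}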
From patchwork Fri May 10 06:52:00 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13660939
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v10 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy
Date: Fri, 10 May 2024 15:52:00 +0900
Message-Id: <20240510065206.76078-7-byungchul@sk.com>
In-Reply-To: <20240510065206.76078-1-byungchul@sk.com>
References: <20240510065206.76078-1-byungchul@sk.com>
Introduce a new API, folio_put_ugen(), to deliver an unmap generation number
to pcp or buddy, which the luf mechanism will use to track the need of a tlb
flush for each page residing in pcp or buddy.

For now, the delivery works for the following call path, which releases
source folios during migration:

   folio_put_ugen()
      __folio_put_ugen()
         free_unref_page()
            free_unref_page_commit()
               free_one_page()
                  __free_one_page()

The generation number should be handed over properly when pages travel
between pcp and buddy, and the necessary handling must be done on exit from
pcp or buddy.  This patch doesn't include the actual body of the tlb flush
on the exit, which will be filled in by the main patch of the luf mechanism.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/mm.h    |  22 +++++++
 include/linux/sched.h |   1 +
 mm/compaction.c       |  10 +++
 mm/internal.h         |  70 +++++++++++++++++++-
 mm/page_alloc.c       | 144 ++++++++++++++++++++++++++++++++++--------
 mm/page_isolation.c   |   6 ++
 mm/page_reporting.c   |  10 +++
 mm/swap.c             |  12 +++-
 8 files changed, 247 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc33f8269fb5..2369ebedb8bd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1312,6 +1312,7 @@ static inline struct folio *virt_to_folio(const void *x)
 }

 void __folio_put(struct folio *folio);
+void __folio_put_ugen(struct folio *folio, unsigned short int ugen);

 void put_pages_list(struct list_head *pages);

@@ -1509,6 +1510,27 @@ static inline void folio_put(struct folio *folio)
 		__folio_put(folio);
 }

+/**
+ * folio_put_ugen - Decrement the last reference count on a folio.
+ * @folio: The folio.
+ * @ugen: The unmap generation # of TLB flush that the folio requires.
+ *
+ * The folio's reference count should be one since the only user, folio
+ * migration code, calls folio_put_ugen() only when the folio has no
+ * other references.  The memory will be released back to the page
+ * allocator and may be used by another allocation immediately.  Do not
+ * access the memory or the struct folio after calling folio_put_ugen().
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context.  May be called while holding a spinlock.
+ */
+static inline void folio_put_ugen(struct folio *folio, unsigned short int ugen)
+{
+	if (WARN_ON(!folio_put_testzero(folio)))
+		return;
+	__folio_put_ugen(folio, ugen);
+}
+
 /**
  * folio_put_refs - Reduce the reference count on a folio.
  * @folio: The folio.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4118b3f959c3..2aa48adad226 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,7 @@ struct task_struct {
 #endif

 	struct tlbflush_unmap_batch	tlb_ubc;
+	unsigned short int		ugen;

 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..13799fbb2a9a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -701,6 +701,11 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	if (locked)
 		spin_unlock_irqrestore(&cc->zone->lock, flags);

+	/*
+	 * Check and flush before using the isolated pages.
+	 */
+	check_flush_task_ugen();
+
 	/*
 	 * Be careful to not go outside of the pageblock.
 	 */
@@ -1673,6 +1678,11 @@ static void fast_isolate_freepages(struct compact_control *cc)

 	spin_unlock_irqrestore(&cc->zone->lock, flags);

+	/*
+	 * Check and flush before using the isolated pages.
+	 */
+	check_flush_task_ugen();
+
 	/* Skip fast search if enough freepages isolated */
 	if (cc->nr_freepages >= cc->nr_migratepages)
 		break;
diff --git a/mm/internal.h b/mm/internal.h
index eb9c7d8650fc..332662047c17 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -638,7 +638,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);

 extern int user_min_free_kbytes;

-void free_unref_page(struct page *page, unsigned int order);
+void free_unref_page(struct page *page, unsigned int order, unsigned short int ugen);
 void free_unref_folios(struct folio_batch *fbatch);

 extern void zone_pcp_reset(struct zone *zone);
@@ -1512,4 +1512,72 @@ static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
 void workingset_update_node(struct xa_node *node);
 extern struct list_lru shadow_nodes;

+#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b)
+{
+	if (!a || !b)
+		return a + b;
+
+	/*
+	 * The ugen is wrapped around so let's use this trick.
+	 */
+	if ((short int)(a - b) < 0)
+		return b;
+	else
+		return a;
+}
+
+static inline void update_task_ugen(unsigned short int ugen)
+{
+	current->ugen = ugen_latest(current->ugen, ugen);
+}
+
+static inline unsigned short int hand_over_task_ugen(void)
+{
+	unsigned short int ret = current->ugen;
+
+	current->ugen = 0;
+	return ret;
+}
+
+static inline void check_flush_task_ugen(void)
+{
+	/*
+	 * XXX: luf mechanism will handle this. For now, do nothing but
+	 * reset current's ugen to finalize this turn.
+	 */
+	current->ugen = 0;
+}
+
+/*
+ * Check the constraints of what luf currently supports.
+ */
+static inline bool can_luf_folio(struct folio *f)
+{
+	bool can_luf = true;
+
+	/*
+	 * XXX: Remove the constraint once luf handles zone device folio.
+	 */
+	can_luf = can_luf && likely(!folio_is_zone_device(f));
+
+	/*
+	 * XXX: Remove the constraint once luf handles hugetlb folio.
+	 */
+	can_luf = can_luf && likely(!folio_test_hugetlb(f));
+
+	/*
+	 * XXX: Remove the constraint once luf handles large folio.
+	 */
+	can_luf = can_luf && likely(!folio_test_large(f));
+
+	return can_luf;
+}
+#else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { return 0; }
+static inline void update_task_ugen(unsigned short int ugen) {}
+static inline unsigned short int hand_over_task_ugen(void) { return 0; }
+static inline void check_flush_task_ugen(void) {}
+static inline bool can_luf_folio(struct folio *f) { return false; }
+#endif
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 917b22b429d1..2cd278c207d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -696,6 +696,7 @@ static inline void __del_page_from_free_list(struct page *page, struct zone *zon
 	if (page_reported(page))
 		__ClearPageReported(page);

+	update_task_ugen(page_buddy_ugen(page));
 	list_del(&page->buddy_list);
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
@@ -768,7 +769,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 static inline void __free_one_page(struct page *page,
 		unsigned long pfn,
 		struct zone *zone, unsigned int order,
-		int migratetype, fpi_t fpi_flags)
+		int migratetype, fpi_t fpi_flags, unsigned short int ugen)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long buddy_pfn = 0;
@@ -783,12 +784,22 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);

+	/*
+	 * Ensure private is zero before using it inside buddy.
+	 */
+	set_page_private(page, 0);
+
 	account_freepages(zone, 1 << order, migratetype);

 	while (order < MAX_PAGE_ORDER) {
 		int buddy_mt = migratetype;

 		if (compaction_capture(capc, page, order, migratetype)) {
+			/*
+			 * Capturer will check_flush_task_ugen() through
+			 * prep_new_page().
+			 */
+			update_task_ugen(ugen);
 			account_freepages(zone, -(1 << order), migratetype);
 			return;
 		}
@@ -819,6 +830,11 @@ static inline void __free_one_page(struct page *page,
 		if (page_is_guard(buddy))
 			clear_page_guard(zone, buddy, order);
 		else
+			/*
+			 * __del_page_from_free_list() updates current's
+			 * ugen that pairs with hand_over_task_ugen() below
+			 * in this function.
+			 */
 			__del_page_from_free_list(buddy, zone, order, buddy_mt);

 		if (unlikely(buddy_mt != migratetype)) {
@@ -837,7 +853,8 @@ static inline void __free_one_page(struct page *page,
 	}

 done_merging:
-	set_buddy_order_ugen(page, order, 0);
+	ugen = ugen_latest(ugen, hand_over_task_ugen());
+	set_buddy_order_ugen(page, order, ugen);

 	if (fpi_flags & FPI_TO_TAIL)
 		to_tail = true;
@@ -1048,6 +1065,11 @@ __always_inline bool free_pages_prepare(struct page *page,

 	VM_BUG_ON_PAGE(PageTail(page), page);

+	/*
+	 * Ensure private is zero before using it inside pcp.
+	 */
+	set_page_private(page, 0);
+
 	trace_mm_page_free(page, order);
 	kmsan_free_page(page, order);

@@ -1179,17 +1201,23 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		do {
 			unsigned long pfn;
 			int mt;
+			unsigned short int ugen;

 			page = list_last_entry(list, struct page, pcp_list);
 			pfn = page_to_pfn(page);
 			mt = get_pfnblock_migratetype(page, pfn);

+			/*
+			 * pcp uses private to store ugen.
+ */ + ugen = page_private(page); + /* must delete to avoid corrupting pcp list */ list_del(&page->pcp_list); count -= nr_pages; pcp->count -= nr_pages; - __free_one_page(page, pfn, zone, order, mt, FPI_NONE); + __free_one_page(page, pfn, zone, order, mt, FPI_NONE, ugen); trace_mm_page_pcpu_drain(page, order, mt); } while (count > 0 && !list_empty(list)); } @@ -1199,14 +1227,14 @@ static void free_pcppages_bulk(struct zone *zone, int count, static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn, unsigned int order, - fpi_t fpi_flags) + fpi_t fpi_flags, unsigned short int ugen) { unsigned long flags; int migratetype; spin_lock_irqsave(&zone->lock, flags); migratetype = get_pfnblock_migratetype(page, pfn); - __free_one_page(page, pfn, zone, order, migratetype, fpi_flags); + __free_one_page(page, pfn, zone, order, migratetype, fpi_flags, ugen); spin_unlock_irqrestore(&zone->lock, flags); } @@ -1219,7 +1247,7 @@ static void __free_pages_ok(struct page *page, unsigned int order, if (!free_pages_prepare(page, order)) return; - free_one_page(zone, page, pfn, order, fpi_flags); + free_one_page(zone, page, pfn, order, fpi_flags, 0); __count_vm_events(PGFREE, 1 << order); } @@ -1484,6 +1512,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order, static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags, unsigned int alloc_flags) { + /* + * Check and flush before using the pages. + */ + check_flush_task_ugen(); post_alloc_hook(page, order, gfp_flags); if (order && (gfp_flags & __GFP_COMP)) @@ -1519,6 +1551,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order, page = get_page_from_free_area(area, migratetype); if (!page) continue; + /* + * del_page_from_free_list() updates current's ugen that + * pairs with check_flush_task_ugen() in prep_new_page(). + */ del_page_from_free_list(page, zone, current_order, migratetype); expand(zone, page, order, current_order, migratetype); trace_mm_page_alloc_zone_locked(page, order, migratetype, @@ -1681,7 +1717,8 @@ static unsigned long find_large_buddy(unsigned long start_pfn) /* Split a multi-block free page into its individual pageblocks */ static void split_large_buddy(struct zone *zone, struct page *page, - unsigned long pfn, int order) + unsigned long pfn, int order, + unsigned short int ugen) { unsigned long end_pfn = pfn + (1 << order); @@ -1694,7 +1731,7 @@ static void split_large_buddy(struct zone *zone, struct page *page, while (pfn != end_pfn) { int mt = get_pfnblock_migratetype(page, pfn); - __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE); + __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE, ugen); pfn += pageblock_nr_pages; page = pfn_to_page(pfn); } @@ -1736,22 +1773,34 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page, if (pfn != start_pfn) { struct page *buddy = pfn_to_page(pfn); int order = buddy_order(buddy); + unsigned short int ugen; + /* + * del_page_from_free_list() updates current's ugen that + * pairs with the following hand_over_task_ugen(). 
+ */ del_page_from_free_list(buddy, zone, order, get_pfnblock_migratetype(buddy, pfn)); + ugen = hand_over_task_ugen(); set_pageblock_migratetype(page, migratetype); - split_large_buddy(zone, buddy, pfn, order); + split_large_buddy(zone, buddy, pfn, order, ugen); return true; } /* We're the starting block of a larger buddy */ if (PageBuddy(page) && buddy_order(page) > pageblock_order) { int order = buddy_order(page); + unsigned short int ugen; + /* + * del_page_from_free_list() updates current's ugen that + * pairs with the following hand_over_task_ugen(). + */ del_page_from_free_list(page, zone, order, get_pfnblock_migratetype(page, pfn)); + ugen = hand_over_task_ugen(); set_pageblock_migratetype(page, migratetype); - split_large_buddy(zone, page, pfn, order); + split_large_buddy(zone, page, pfn, order, ugen); return true; } move: @@ -1871,6 +1920,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page, /* Take ownership for orders >= pageblock_order */ if (current_order >= pageblock_order) { + /* + * del_page_from_free_list() updates current's ugen that + * pairs with check_flush_task_ugen() in prep_new_page(). + */ del_page_from_free_list(page, zone, current_order, block_type); change_pageblock_range(page, current_order, start_type); expand(zone, page, order, current_order, start_type); @@ -1926,6 +1979,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page, } single_page: + /* + * del_page_from_free_list() updates current's ugen that pairs + * with check_flush_task_ugen() in prep_new_page(). + */ del_page_from_free_list(page, zone, current_order, block_type); expand(zone, page, order, current_order, block_type); return page; @@ -2547,7 +2604,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone, static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp, struct page *page, int migratetype, - unsigned int order) + unsigned int order, unsigned short int ugen) { int high, batch; int pindex; @@ -2561,6 +2618,11 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp, pcp->alloc_factor >>= 1; __count_vm_events(PGFREE, 1 << order); pindex = order_to_pindex(migratetype, order); + + /* + * pcp uses private to store ugen. 
+ */ + set_page_private(page, ugen); list_add(&page->pcp_list, &pcp->lists[pindex]); pcp->count += 1 << order; @@ -2596,7 +2658,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp, /* * Free a pcp page */ -void free_unref_page(struct page *page, unsigned int order) +void free_unref_page(struct page *page, unsigned int order, + unsigned short int ugen) { unsigned long __maybe_unused UP_flags; struct per_cpu_pages *pcp; @@ -2622,7 +2685,7 @@ void free_unref_page(struct page *page, unsigned int order) migratetype = get_pfnblock_migratetype(page, pfn); if (unlikely(migratetype >= MIGRATE_PCPTYPES)) { if (unlikely(is_migrate_isolate(migratetype))) { - free_one_page(page_zone(page), page, pfn, order, FPI_NONE); + free_one_page(page_zone(page), page, pfn, order, FPI_NONE, ugen); return; } migratetype = MIGRATE_MOVABLE; @@ -2632,10 +2695,10 @@ void free_unref_page(struct page *page, unsigned int order) pcp_trylock_prepare(UP_flags); pcp = pcp_spin_trylock(zone->per_cpu_pageset); if (pcp) { - free_unref_page_commit(zone, pcp, page, migratetype, order); + free_unref_page_commit(zone, pcp, page, migratetype, order, ugen); pcp_spin_unlock(pcp); } else { - free_one_page(zone, page, pfn, order, FPI_NONE); + free_one_page(zone, page, pfn, order, FPI_NONE, ugen); } pcp_trylock_finish(UP_flags); } @@ -2666,7 +2729,7 @@ void free_unref_folios(struct folio_batch *folios) */ if (!pcp_allowed_order(order)) { free_one_page(folio_zone(folio), &folio->page, - pfn, order, FPI_NONE); + pfn, order, FPI_NONE, 0); continue; } folio->private = (void *)(unsigned long)order; @@ -2702,7 +2765,7 @@ void free_unref_folios(struct folio_batch *folios) */ if (is_migrate_isolate(migratetype)) { free_one_page(zone, &folio->page, pfn, - order, FPI_NONE); + order, FPI_NONE, 0); continue; } @@ -2715,7 +2778,7 @@ void free_unref_folios(struct folio_batch *folios) if (unlikely(!pcp)) { pcp_trylock_finish(UP_flags); free_one_page(zone, &folio->page, pfn, - order, FPI_NONE); + order, FPI_NONE, 0); continue; } locked_zone = zone; @@ -2730,7 +2793,7 @@ void free_unref_folios(struct folio_batch *folios) trace_mm_page_free_batched(&folio->page); free_unref_page_commit(zone, pcp, &folio->page, migratetype, - order); + order, 0); } if (pcp) { @@ -2781,6 +2844,11 @@ int __isolate_free_page(struct page *page, unsigned int order) return 0; } + /* + * del_page_from_free_list() updates current's ugen. The user of + * the isolated page should check_flush_task_ugen() before using + * it. + */ del_page_from_free_list(page, zone, order, mt); /* @@ -2822,7 +2890,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt) /* Return isolated page to tail of freelist. */ __free_one_page(page, page_to_pfn(page), zone, order, mt, - FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL); + FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL, 0); } /* @@ -2965,6 +3033,11 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order, } page = list_first_entry(list, struct page, pcp_list); + + /* + * Pairs with check_flush_task_ugen() in prep_new_page(). 
+ */ + update_task_ugen(page_private(page)); list_del(&page->pcp_list); pcp->count -= 1 << order; } while (check_new_pages(page, order)); @@ -4791,11 +4864,11 @@ void __free_pages(struct page *page, unsigned int order) struct alloc_tag *tag = pgalloc_tag_get(page); if (put_page_testzero(page)) - free_unref_page(page, order); + free_unref_page(page, order, 0); else if (!head) { pgalloc_tag_sub_pages(tag, (1 << order) - 1); while (order-- > 0) - free_unref_page(page + (1 << order), order); + free_unref_page(page + (1 << order), order, 0); } } EXPORT_SYMBOL(__free_pages); @@ -4857,7 +4930,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); if (page_ref_sub_and_test(page, count)) - free_unref_page(page, compound_order(page)); + free_unref_page(page, compound_order(page), 0); } EXPORT_SYMBOL(__page_frag_cache_drain); @@ -4898,7 +4971,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, goto refill; if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); + free_unref_page(page, compound_order(page), 0); goto refill; } @@ -4942,7 +5015,7 @@ void page_frag_free(void *addr) struct page *page = virt_to_head_page(addr); if (unlikely(put_page_testzero(page))) - free_unref_page(page, compound_order(page)); + free_unref_page(page, compound_order(page), 0); } EXPORT_SYMBOL(page_frag_free); @@ -6751,10 +6824,19 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn) BUG_ON(!PageBuddy(page)); VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE); order = buddy_order(page); + /* + * del_page_from_free_list() updates current's ugen that + * pairs with check_flush_task_ugen() below in this function. + */ del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE); pfn += (1 << order); } spin_unlock_irqrestore(&zone->lock, flags); + + /* + * Check and flush before using it. + */ + check_flush_task_ugen(); } #endif @@ -6830,6 +6912,11 @@ bool take_page_off_buddy(struct page *page) int migratetype = get_pfnblock_migratetype(page_head, pfn_head); + /* + * del_page_from_free_list() updates current's + * ugen that pairs with check_flush_task_ugen() below + * in this function. + */ del_page_from_free_list(page_head, zone, page_order, migratetype); break_down_buddy_pages(zone, page_head, page, 0, @@ -6842,6 +6929,11 @@ bool take_page_off_buddy(struct page *page) break; } spin_unlock_irqrestore(&zone->lock, flags); + + /* + * Check and flush before using it. + */ + check_flush_task_ugen(); return ret; } @@ -6860,7 +6952,7 @@ bool put_page_back_buddy(struct page *page) int migratetype = get_pfnblock_migratetype(page, pfn); ClearPageHWPoisonTakenOff(page); - __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE); + __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE, 0); if (TestClearPageHWPoison(page)) { ret = true; } diff --git a/mm/page_isolation.c b/mm/page_isolation.c index 042937d5abe4..5823da60a621 100644 --- a/mm/page_isolation.c +++ b/mm/page_isolation.c @@ -260,6 +260,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype) zone->nr_isolate_pageblock--; out: spin_unlock_irqrestore(&zone->lock, flags); + + /* + * Check and flush for the pages that have been isolated. 
+ */ + if (isolated_page) + check_flush_task_ugen(); } static inline struct page * diff --git a/mm/page_reporting.c b/mm/page_reporting.c index e4c428e61d8c..4f94a3ea1b22 100644 --- a/mm/page_reporting.c +++ b/mm/page_reporting.c @@ -221,6 +221,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone, /* release lock before waiting on report processing */ spin_unlock_irq(&zone->lock); + /* + * Check and flush before using the isolated pages. + */ + check_flush_task_ugen(); + /* begin processing pages in local list */ err = prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY); @@ -253,6 +258,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone, spin_unlock_irq(&zone->lock); + /* + * Check and flush before using the isolated pages. + */ + check_flush_task_ugen(); + return err; } diff --git a/mm/swap.c b/mm/swap.c index f0d478eee292..0fc5a5e8457f 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -126,10 +126,20 @@ void __folio_put(struct folio *folio) if (folio_test_large(folio) && folio_test_large_rmappable(folio)) folio_undo_large_rmappable(folio); mem_cgroup_uncharge(folio); - free_unref_page(&folio->page, folio_order(folio)); + free_unref_page(&folio->page, folio_order(folio), 0); } EXPORT_SYMBOL(__folio_put); +void __folio_put_ugen(struct folio *folio, unsigned short int ugen) +{ + if (WARN_ON(!can_luf_folio(folio))) + return; + + page_cache_release(folio); + mem_cgroup_uncharge(folio); + free_unref_page(&folio->page, 0, ugen); +} + /** * put_pages_list() - release a list of pages * @pages: list of pages threaded on page->lru From patchwork Fri May 10 06:52:01 2024 X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13660938
From: Byungchul Park To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com Subject: [PATCH v10 07/12] mm: add a parameter, unmap generation number, to free_unref_folios() Date: Fri, 10 May 2024 15:52:01 +0900 Message-Id: <20240510065206.76078-8-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240510065206.76078-1-byungchul@sk.com> References: <20240510065206.76078-1-byungchul@sk.com>
Unmap generation number is used by the luf mechanism to track whether a tlb flush is still needed for each page residing in pcp or buddy. The number should be delivered to pcp or buddy via free_unref_folios(), which releases folios that have been unmapped during reclaim in shrink_folio_list(). Signed-off-by: Byungchul Park --- mm/internal.h | 2 +- mm/page_alloc.c | 10 +++++----- mm/swap.c | 6 +++--- mm/vmscan.c | 8 ++++---- 4 files changed, 13 insertions(+), 13 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 332662047c17..0d4c74e76de6 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -639,7 +639,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order); extern int user_min_free_kbytes; void free_unref_page(struct page *page, unsigned int order, unsigned short int ugen); -void free_unref_folios(struct folio_batch *fbatch); +void free_unref_folios(struct folio_batch *fbatch, unsigned short int ugen); extern void zone_pcp_reset(struct zone *zone); extern void zone_pcp_disable(struct zone *zone); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 2cd278c207d1..63f14305f4de 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2706,7 +2706,7 @@ void free_unref_page(struct page *page, unsigned int order, /* * Free a batch of folios */ -void free_unref_folios(struct folio_batch *folios) +void free_unref_folios(struct folio_batch *folios, unsigned short int ugen) { unsigned long __maybe_unused UP_flags; struct per_cpu_pages *pcp = NULL; @@ -2729,7 +2729,7 @@ void free_unref_folios(struct folio_batch *folios) */ if (!pcp_allowed_order(order)) { free_one_page(folio_zone(folio), &folio->page, - pfn, order, FPI_NONE, 0); + pfn, order, FPI_NONE, ugen); continue; } folio->private = (void *)(unsigned long)order; @@ -2765,7 +2765,7 @@ void free_unref_folios(struct folio_batch *folios) */ if (is_migrate_isolate(migratetype)) { free_one_page(zone, &folio->page, pfn, - order, FPI_NONE, 0); + order, FPI_NONE, ugen); continue; } @@ -2778,7 +2778,7 @@ void free_unref_folios(struct folio_batch *folios) if (unlikely(!pcp)) { pcp_trylock_finish(UP_flags); free_one_page(zone, &folio->page, pfn, - order, FPI_NONE, 0); + order, FPI_NONE, ugen); continue; } locked_zone = zone; @@ -2793,7 +2793,7 @@ void free_unref_folios(struct folio_batch *folios) trace_mm_page_free_batched(&folio->page);
free_unref_page_commit(zone, pcp, &folio->page, migratetype, - order, 0); + order, ugen); } if (pcp) { diff --git a/mm/swap.c b/mm/swap.c index 0fc5a5e8457f..1937ac937b8f 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -163,11 +163,11 @@ void put_pages_list(struct list_head *pages) /* LRU flag must be clear because it's passed using the lru */ if (folio_batch_add(&fbatch, folio) > 0) continue; - free_unref_folios(&fbatch); + free_unref_folios(&fbatch, 0); } if (fbatch.nr) - free_unref_folios(&fbatch); + free_unref_folios(&fbatch, 0); INIT_LIST_HEAD(pages); } EXPORT_SYMBOL(put_pages_list); @@ -1029,7 +1029,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs) folios->nr = j; mem_cgroup_uncharge_folios(folios); - free_unref_folios(folios); + free_unref_folios(folios, 0); } EXPORT_SYMBOL(folios_put_refs); diff --git a/mm/vmscan.c b/mm/vmscan.c index 49bd94423961..bb0ff11f9ec9 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1460,7 +1460,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, if (folio_batch_add(&free_folios, folio) == 0) { mem_cgroup_uncharge_folios(&free_folios); try_to_unmap_flush(); - free_unref_folios(&free_folios); + free_unref_folios(&free_folios, 0); } continue; @@ -1527,7 +1527,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, mem_cgroup_uncharge_folios(&free_folios); try_to_unmap_flush(); - free_unref_folios(&free_folios); + free_unref_folios(&free_folios, 0); list_splice(&ret_folios, folio_list); count_vm_events(PGACTIVATE, pgactivate); @@ -1869,7 +1869,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec, if (folio_batch_add(&free_folios, folio) == 0) { spin_unlock_irq(&lruvec->lru_lock); mem_cgroup_uncharge_folios(&free_folios); - free_unref_folios(&free_folios); + free_unref_folios(&free_folios, 0); spin_lock_irq(&lruvec->lru_lock); } @@ -1891,7 +1891,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec, if (free_folios.nr) { spin_unlock_irq(&lruvec->lru_lock); mem_cgroup_uncharge_folios(&free_folios); - free_unref_folios(&free_folios); + free_unref_folios(&free_folios, 0); spin_lock_irq(&lruvec->lru_lock); } From patchwork Fri May 10 06:52:02 2024 X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13660940
From: Byungchul Park To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com Subject: [PATCH v10 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush Date: Fri, 10 May 2024 15:52:02 +0900 Message-Id: <20240510065206.76078-9-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240510065206.76078-1-byungchul@sk.com> References: <20240510065206.76078-1-byungchul@sk.com>
Functionally, no change. This is a preparation for the luf mechanism, which requires recognizing read-only tlb entries and handling them in a different way. The newly introduced API in this patch, fold_ubc(), will be used by the luf mechanism.
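[Editorial note: the following stand-alone C sketch is an illustration of the fold semantics described above; it is not part of the patch. It models the arch-specific batch as a plain bitmask of pending CPUs, so under that simplifying assumption only the merge-and-reset behaviour of fold_ubc() is shown.]

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct tlbflush_unmap_batch: the arch-specific
 * batch is modeled as a bitmask of CPUs that still hold stale TLB entries. */
struct ubc_model {
        unsigned long pending_cpus;
        bool flush_required;
        bool writable;
};

/* Mirrors fold_ubc(): merge src into dst, then reset src. */
static void fold_ubc_model(struct ubc_model *dst, struct ubc_model *src)
{
        if (!src->flush_required)
                return;

        dst->pending_cpus |= src->pending_cpus;   /* stands in for arch_tlbbatch_fold() */
        dst->writable = dst->writable || src->writable;
        dst->flush_required = true;

        src->pending_cpus = 0;                    /* stands in for arch_tlbbatch_clear() */
        src->flush_required = false;
        src->writable = false;
}

int main(void)
{
        struct ubc_model tlb_ubc = { 0, false, false };
        struct ubc_model tlb_ubc_ro = { 0x5, true, false };

        /* Before an actual flush, the read-only batch is folded into the
         * main batch exactly once, and the source ends up empty. */
        fold_ubc_model(&tlb_ubc, &tlb_ubc_ro);
        printf("flush_required=%d pending_cpus=0x%lx ro_left=0x%lx\n",
               tlb_ubc.flush_required, tlb_ubc.pending_cpus,
               tlb_ubc_ro.pending_cpus);
        return 0;
}

This mirrors how try_to_unmap_flush() folds tlb_ubc_ro into tlb_ubc before flushing, so read-only entries accumulated separately are neither lost nor flushed twice.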
Signed-off-by: Byungchul Park --- include/linux/sched.h | 1 + mm/internal.h | 4 ++++ mm/rmap.c | 34 ++++++++++++++++++++++++++++++++-- 3 files changed, 37 insertions(+), 2 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 2aa48adad226..0915390b1b5e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1339,6 +1339,7 @@ struct task_struct { #endif struct tlbflush_unmap_batch tlb_ubc; + struct tlbflush_unmap_batch tlb_ubc_ro; unsigned short int ugen; /* Cache last used pipe for splice(): */ diff --git a/mm/internal.h b/mm/internal.h index 0d4c74e76de6..805f0e6ecab4 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1100,6 +1100,7 @@ extern struct workqueue_struct *mm_percpu_wq; void try_to_unmap_flush(void); void try_to_unmap_flush_dirty(void); void flush_tlb_batched_pending(struct mm_struct *mm); +void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src); #else static inline void try_to_unmap_flush(void) { @@ -1110,6 +1111,9 @@ static inline void try_to_unmap_flush_dirty(void) static inline void flush_tlb_batched_pending(struct mm_struct *mm) { } +static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src) +{ +} #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */ extern const struct trace_print_flags pageflag_names[]; diff --git a/mm/rmap.c b/mm/rmap.c index cf8a99a49aef..328b5e2217e6 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -635,6 +635,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio, } #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH + +void fold_ubc(struct tlbflush_unmap_batch *dst, + struct tlbflush_unmap_batch *src) +{ + if (!src->flush_required) + return; + + /* + * Fold src to dst. + */ + arch_tlbbatch_fold(&dst->arch, &src->arch); + dst->writable = dst->writable || src->writable; + dst->flush_required = true; + + /* + * Reset src. + */ + arch_tlbbatch_clear(&src->arch); + src->flush_required = false; + src->writable = false; +} + /* * Flush TLB entries for recently unmapped pages from remote CPUs. 
It is * important if a PTE was dirty when it was unmapped that it's flushed @@ -644,7 +666,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio, void try_to_unmap_flush(void) { struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro; + fold_ubc(tlb_ubc, tlb_ubc_ro); if (!tlb_ubc->flush_required) return; @@ -658,8 +682,9 @@ void try_to_unmap_flush(void) void try_to_unmap_flush_dirty(void) { struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro; - if (tlb_ubc->writable) + if (tlb_ubc->writable || tlb_ubc_ro->writable) try_to_unmap_flush(); } @@ -676,13 +701,18 @@ void try_to_unmap_flush_dirty(void) static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval, unsigned long uaddr) { - struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc; int batch; bool writable = pte_dirty(pteval); if (!pte_accessible(mm, pteval)) return; + if (pte_write(pteval)) + tlb_ubc = &current->tlb_ubc; + else + tlb_ubc = &current->tlb_ubc_ro; + arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr); tlb_ubc->flush_required = true; From patchwork Fri May 10 06:52:03 2024 X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13660942
From: Byungchul Park To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com Subject: [PATCH v10 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped Date: Fri, 10 May 2024 15:52:03 +0900 Message-Id: <20240510065206.76078-10-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240510065206.76078-1-byungchul@sk.com> References: <20240510065206.76078-1-byungchul@sk.com>
A new mechanism, LUF(Lazy Unmap Flush), defers tlb flush until folios that have been unmapped and freed eventually get allocated again. It's safe for folios that had been mapped read-only and were unmapped, since the contents of the folios don't change while staying in pcp or buddy, so we can still read the data through the stale tlb entries. tlb flush can be deferred when folios get unmapped, as long as the flush needed is guaranteed to be performed before the folios actually become used, and of course only if none of the corresponding ptes have write permission. Otherwise, the system will get messed up. To achieve that: 1. For the folios that map only to non-writable tlb entries, prevent tlb flush during unmapping but perform it just before the folios actually become used, out of buddy or pcp. 2. When any non-writable ptes change to writable e.g. through fault handler, give up luf mechanism and perform the tlb flush required right away. 3. When a writable mapping is created e.g. through mmap(), give up luf mechanism and perform the tlb flush required right away. No matter what type of workload is used for performance evaluation, the result would be positive thanks to the unconditional reduction of tlb flushes, tlb misses and interrupts. For the test, I picked one of the most popular and heavy workloads, llama.cpp, an LLM(Large Language Model) inference engine. The result would depend on memory latency and how often reclaim runs, which implies tlb miss overhead and how many times unmapping happens. In my system, the result shows: 1. tlb flushes are reduced about 95%. 2. tlb misses(itlb) are reduced about 80%. 3. tlb misses(dtlb store) are reduced about 57%. 4. tlb misses(dtlb load) are reduced about 24%. 5. tlb shootdown interrupts are reduced about 95%. 6. The test program runtime is reduced about 5%. The test environment and results are as follows: Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430 CPU: 1 socket 64 core with hyper thread on Numa: 2 nodes (64 CPUs DRAM 42GB, no CPUs CXL expander 98GB) Config: swap off, numa balancing tiering on, demotion enabled The test set: llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 & llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 & llama.cpp/main -m $(70G_model3) -p "who are you?"
-s 1 -t 15 -n 20 & wait where -t: nr of threads, -s: seed used to make the runtime stable, -n: nr of tokens that determines the runtime, -p: prompt to ask, -m: LLM model to use. Run the test set 10 times successively with caches dropped every run via 'echo 3 > /proc/sys/vm/drop_caches'. Each inference prints its runtime at the end of each. 1. Runtime from the output of llama.cpp: BEFORE ------ llama_print_timings: total time = 1002461.95 ms / 24 tokens llama_print_timings: total time = 1044978.38 ms / 24 tokens llama_print_timings: total time = 1000653.09 ms / 24 tokens llama_print_timings: total time = 1047104.80 ms / 24 tokens llama_print_timings: total time = 1069430.36 ms / 24 tokens llama_print_timings: total time = 1068201.16 ms / 24 tokens llama_print_timings: total time = 1078092.59 ms / 24 tokens llama_print_timings: total time = 1073200.45 ms / 24 tokens llama_print_timings: total time = 1067136.00 ms / 24 tokens llama_print_timings: total time = 1076442.56 ms / 24 tokens llama_print_timings: total time = 1004142.64 ms / 24 tokens llama_print_timings: total time = 1042942.65 ms / 24 tokens llama_print_timings: total time = 999933.76 ms / 24 tokens llama_print_timings: total time = 1046548.83 ms / 24 tokens llama_print_timings: total time = 1068671.48 ms / 24 tokens llama_print_timings: total time = 1068285.76 ms / 24 tokens llama_print_timings: total time = 1077789.63 ms / 24 tokens llama_print_timings: total time = 1071558.93 ms / 24 tokens llama_print_timings: total time = 1066181.55 ms / 24 tokens llama_print_timings: total time = 1076767.53 ms / 24 tokens llama_print_timings: total time = 1004065.63 ms / 24 tokens llama_print_timings: total time = 1044522.13 ms / 24 tokens llama_print_timings: total time = 999725.33 ms / 24 tokens llama_print_timings: total time = 1047510.77 ms / 24 tokens llama_print_timings: total time = 1068010.27 ms / 24 tokens llama_print_timings: total time = 1068999.31 ms / 24 tokens llama_print_timings: total time = 1077648.05 ms / 24 tokens llama_print_timings: total time = 1071378.96 ms / 24 tokens llama_print_timings: total time = 1066326.32 ms / 24 tokens llama_print_timings: total time = 1077088.92 ms / 24 tokens AFTER ----- llama_print_timings: total time = 988522.03 ms / 24 tokens llama_print_timings: total time = 997204.52 ms / 24 tokens llama_print_timings: total time = 996605.86 ms / 24 tokens llama_print_timings: total time = 991985.50 ms / 24 tokens llama_print_timings: total time = 1035143.31 ms / 24 tokens llama_print_timings: total time = 993660.18 ms / 24 tokens llama_print_timings: total time = 983082.14 ms / 24 tokens llama_print_timings: total time = 990431.36 ms / 24 tokens llama_print_timings: total time = 992707.09 ms / 24 tokens llama_print_timings: total time = 992673.27 ms / 24 tokens llama_print_timings: total time = 989285.43 ms / 24 tokens llama_print_timings: total time = 996710.06 ms / 24 tokens llama_print_timings: total time = 996534.64 ms / 24 tokens llama_print_timings: total time = 991344.17 ms / 24 tokens llama_print_timings: total time = 1035210.84 ms / 24 tokens llama_print_timings: total time = 994714.13 ms / 24 tokens llama_print_timings: total time = 984184.15 ms / 24 tokens llama_print_timings: total time = 990909.45 ms / 24 tokens llama_print_timings: total time = 991881.48 ms / 24 tokens llama_print_timings: total time = 993918.03 ms / 24 tokens llama_print_timings: total time = 990061.34 ms / 24 tokens llama_print_timings: total time = 998076.69 ms / 24 tokens llama_print_timings: total time = 997082.59 ms 
/ 24 tokens llama_print_timings: total time = 990677.58 ms / 24 tokens llama_print_timings: total time = 1036054.94 ms / 24 tokens llama_print_timings: total time = 994125.93 ms / 24 tokens llama_print_timings: total time = 982467.01 ms / 24 tokens llama_print_timings: total time = 990191.60 ms / 24 tokens llama_print_timings: total time = 993319.24 ms / 24 tokens llama_print_timings: total time = 992540.57 ms / 24 tokens 2. tlb shootdowns from 'cat /proc/interrupts': BEFORE ------ TLB: 125553646 141418810 161932620 176853972 186655697 190399283 192143823 196414038 192872439 193313658 193395617 192521416 190788161 195067598 198016061 193607347 194293972 190786732 191545637 194856822 191801931 189634535 190399803 196365922 195268398 190115840 188050050 193194908 195317617 190820190 190164820 185556071 226797214 229592631 216112464 209909495 205575979 205950252 204948111 197999795 198892232 205287952 199344631 195015158 195869844 198858745 195692876 200961904 203463252 205921722 199850838 206145986 199613202 199961345 200129577 203020521 207873649 203697671 197093386 204243803 205993323 200934664 204193128 194435376 TLB shootdowns AFTER ----- TLB: 5648092 6610142 7032849 7882308 8088518 8352310 8656536 8705136 8647426 8905583 8985408 8704522 8884344 9026261 8929974 8869066 8877575 8810096 8770984 8754503 8801694 8865925 8787524 8656432 8755912 8682034 8773935 8832925 8797997 8515777 8481240 8891258 10595243 10285973 9756935 9573681 9398968 9069244 9242984 8899009 9310690 9029095 9069758 9105825 9092703 9270202 9460287 9258546 9180415 9232723 9270611 9175020 9490420 9360316 9420818 9057663 9525631 9310152 9152242 8654483 9181804 9050847 8919916 8883856 TLB shootdowns 3. tlb numbers from 'perf stat' per test set: BEFORE ------ 3163679332 dTLB-load-misses 2017751856 dTLB-store-misses 327092903 iTLB-load-misses 1357543886 tlb:tlb_flush AFTER ----- 2394694609 dTLB-load-misses 861144167 dTLB-store-misses 64055579 iTLB-load-misses 69175002 tlb:tlb_flush Signed-off-by: Byungchul Park --- include/linux/sched.h | 9 ++ mm/internal.h | 43 +++++- mm/memory.c | 8 ++ mm/mmap.c | 8 ++ mm/rmap.c | 308 +++++++++++++++++++++++++++++++++++++++++- 5 files changed, 366 insertions(+), 10 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 0915390b1b5e..6f83703ec284 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1340,8 +1340,17 @@ struct task_struct { struct tlbflush_unmap_batch tlb_ubc; struct tlbflush_unmap_batch tlb_ubc_ro; + struct tlbflush_unmap_batch tlb_ubc_luf; unsigned short int ugen; +#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH) + /* + * whether all the mappings of a folio during unmap are read-only + * so that luf can work on the folio + */ + bool can_luf; +#endif + /* Cache last used pipe for splice(): */ struct pipe_inode_info *splice_pipe; diff --git a/mm/internal.h b/mm/internal.h index 805f0e6ecab4..2a44194f5d39 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1517,6 +1517,38 @@ void workingset_update_node(struct xa_node *node); extern struct list_lru shadow_nodes; #if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH) +unsigned short int try_to_unmap_luf(void); +void check_luf_flush(unsigned short int ugen); +void luf_flush(void); + +/* + * Reset the indicator indicating there are no writable mappings at the + * beginning of every rmap traverse for unmap. luf can work only when + * all the mappings are read-only. 
+ */ +static inline void can_luf_init(void) +{ + current->can_luf = true; +} + +/* + * Mark the folio is not applicable to luf once it found a writble or + * dirty pte during rmap traverse for unmap. + */ +static inline void can_luf_fail(void) +{ + current->can_luf = false; +} + +/* + * Check if all the mappings are read-only and read-only mappings even + * exist. + */ +static inline bool can_luf_test(void) +{ + return current->can_luf && current->tlb_ubc_ro.flush_required; +} + static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { if (!a || !b) @@ -1546,10 +1578,7 @@ static inline unsigned short int hand_over_task_ugen(void) static inline void check_flush_task_ugen(void) { - /* - * XXX: luf mechanism will handle this. For now, do nothing but - * reset current's ugen to finalize this turn. - */ + check_luf_flush(current->ugen); current->ugen = 0; } @@ -1578,6 +1607,12 @@ static inline bool can_luf_folio(struct folio *f) return can_luf; } #else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */ +static inline unsigned short int try_to_unmap_luf(void) { return 0; } +static inline void check_luf_flush(unsigned short int ugen) {} +static inline void luf_flush(void) {} +static inline void can_luf_init(void) {} +static inline void can_luf_fail(void) {} +static inline bool can_luf_test(void) { return false; } static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { return 0; } static inline void update_task_ugen(unsigned short int ugen) {} static inline unsigned short int hand_over_task_ugen(void) { return 0; } diff --git a/mm/memory.c b/mm/memory.c index 33d87b64d15d..f218c275d307 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3617,6 +3617,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) if (vmf->page) folio = page_folio(vmf->page); + /* + * The folio may or may not be one that is under luf's control + * and might be about to change its permission to writable. + * Conservatively give up deferring tlb flush just in case. + */ + if (folio) + luf_flush(); + /* * Shared mapping: we are guaranteed to have VM_WRITE and * FAULT_FLAG_WRITE set at this point. diff --git a/mm/mmap.c b/mm/mmap.c index 47363e7f7ea2..3b3bece4b079 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1271,6 +1271,14 @@ unsigned long do_mmap(struct file *file, unsigned long addr, pkey = 0; } + /* + * This mmap may or may not be mapping to ones that is under + * luf's control. However, conservatively give up deferring tlb + * flush just in case. + */ + if (prot & PROT_WRITE) + luf_flush(); + /* Do simple checking here so the lower-level routines won't have * to. we assume access permissions have been handled by the open * of the memory object, so we don't do any here. diff --git a/mm/rmap.c b/mm/rmap.c index 328b5e2217e6..e42783c02114 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -635,6 +635,270 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio, } #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH +static struct tlbflush_unmap_batch luf_ubc; +static DEFINE_SPINLOCK(luf_lock); + +/* + * Don't be zero to distinguish from invalid ugen, 0. + */ +static unsigned short int ugen_next(unsigned short int a) +{ + return a + 1 ?: a + 2; +} + +static bool ugen_before(unsigned short int a, unsigned short int b) +{ + return (short int)(a - b) < 0; +} + +/* + * Need to synchronize between tlb flush and managing pending CPUs in + * luf_ubc. 
Take a look at the following scenario, where CPU0 is in + * try_to_unmap_flush() and CPU1 is in migrate_pages_batch(): + * + * CPU0 CPU1 + * ---- ---- + * tlb flush + * unmap folios (needing tlb flush) + * add pending CPUs to luf_ubc + * <-- not performed tlb flush needed by + * the unmap above yet but the request + * will be cleared by CPU0 shortly. bug! + * clear the CPUs from luf_ubc + * + * The pending CPUs added in CPU1 should not be cleared from luf_ubc + * in CPU0 because the tlb flush for luf_ubc added in CPU1 has not + * been performed this turn. To avoid this, using 'on_flushing' + * variable, prevent adding pending CPUs to luf_ubc and give up luf + * mechanism if someone is in the middle of tlb flush, like: + * + * CPU0 CPU1 + * ---- ---- + * on_flushing++ + * tlb flush + * unmap folios (needing tlb flush) + * if on_flushing == 0: + * add pending CPUs to luf_ubc + * else: <-- hit + * give up luf mechanism + * clear the CPUs from luf_ubc + * on_flushing-- + * + * Only the following case would be allowed for luf mechanism to work: + * + * CPU0 CPU1 + * ---- ---- + * unmap folios (needing tlb flush) + * if on_flushing == 0: <-- hit + * add pending CPUs to luf_ubc + * else: + * give up luf mechanism + * on_flushing++ + * tlb flush + * clear the CPUs from luf_ubc + * on_flushing-- + */ +static int on_flushing; + +/* + * When more than one thread enters check_luf_flush() at the same + * time, each should wait for the request in progress to be done to + * avoid the following scenario, where both CPUs are in + * check_luf_flush(): + * + * CPU0 CPU1 + * ---- ---- + * if !luf_ubc.flush_required: + * return + * luf_ubc.flush_required = false + * if !luf_ubc.flush_required: <-- hit + * return <-- not performed tlb flush + * needed yet but return. bug! + * luf_ubc.flush_required = false + * try_to_unmap_flush() + * finalize + * try_to_unmap_flush() <-- performs tlb flush needed + * finalize + * + * So it should be handled: + * + * CPU0 CPU1 + * ---- ---- + * atomically execute { + * if luf_on_flushing: + * wait for the completion + * return + * if !luf_ubc.flush_required: + * return + * luf_ubc.flush_required = false + * luf_on_flushing = true + * } + * atomically execute { + * if luf_on_flushing: <-- hit + * wait for the completion + * return <-- tlb flush needed is done + * if !luf_ubc.flush_required: + * return + * luf_ubc.flush_required = false + * luf_on_flushing = true + * } + * + * try_to_unmap_flush() + * luf_on_flushing = false + * finalize + * try_to_unmap_flush() <-- performs tlb flush needed + * luf_on_flushing = false + * finalize + */ +static bool luf_on_flushing; + +/* + * Generation number for the current request of deferred tlb flush. + */ +static unsigned short int luf_gen; + +/* + * Generation number for the next request. + */ +static unsigned short int luf_gen_next = 1; + +/* + * Generation number for the latest request handled. + */ +static unsigned short int luf_gen_done; + +unsigned short int try_to_unmap_luf(void) +{ + struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf; + unsigned long flags; + unsigned short int ugen; + + if (!spin_trylock_irqsave(&luf_lock, flags)) { + /* + * Give up luf mechanism. Just let the tlb flush needed + * be handled by try_to_unmap_flush() at the caller side. + */ + fold_ubc(tlb_ubc, tlb_ubc_luf); + return 0; + } + + if (on_flushing || luf_on_flushing) { + spin_unlock_irqrestore(&luf_lock, flags); + + /* + * Give up luf mechanism.
Just let tlb flush needed + * handled by try_to_unmap_flush() at the caller side. + */ + fold_ubc(tlb_ubc, tlb_ubc_luf); + return 0; + } + + fold_ubc(&luf_ubc, tlb_ubc_luf); + ugen = luf_gen = luf_gen_next; + spin_unlock_irqrestore(&luf_lock, flags); + + return ugen; +} + +static void rmap_flush_start(void) +{ + unsigned long flags; + + spin_lock_irqsave(&luf_lock, flags); + on_flushing++; + spin_unlock_irqrestore(&luf_lock, flags); +} + +static void rmap_flush_end(struct tlbflush_unmap_batch *batch) +{ + unsigned long flags; + + spin_lock_irqsave(&luf_lock, flags); + if (arch_tlbbatch_done(&luf_ubc.arch, &batch->arch)) { + luf_ubc.flush_required = false; + luf_ubc.writable = false; + } + on_flushing--; + spin_unlock_irqrestore(&luf_lock, flags); +} + +/* + * It must be guaranteed to have completed tlb flush requested on return. + */ +void check_luf_flush(unsigned short int ugen) +{ + struct tlbflush_unmap_batch *tlb_ubc = ¤t->tlb_ubc; + unsigned long flags; + + /* + * Nothing has been requested. We are done. + */ + if (!ugen) + return; +retry: + /* + * We can see a larger value than or equal to luf_gen_done, + * which means the tlb flush we need has been done. + */ + if (!ugen_before(READ_ONCE(luf_gen_done), ugen)) + return; + + spin_lock_irqsave(&luf_lock, flags); + + /* + * With luf_lock held, we might read luf_gen_done updated. + */ + if (ugen_next(luf_gen_done) != ugen) { + spin_unlock_irqrestore(&luf_lock, flags); + return; + } + + /* + * Others are already working for us. + */ + if (luf_on_flushing) { + spin_unlock_irqrestore(&luf_lock, flags); + goto retry; + } + + if (!luf_ubc.flush_required) { + spin_unlock_irqrestore(&luf_lock, flags); + return; + } + + fold_ubc(tlb_ubc, &luf_ubc); + luf_gen_next = ugen_next(luf_gen); + luf_on_flushing = true; + spin_unlock_irqrestore(&luf_lock, flags); + + try_to_unmap_flush(); + + spin_lock_irqsave(&luf_lock, flags); + luf_on_flushing = false; + + /* + * luf_gen_done can be read by another with luf_lock not + * held so use WRITE_ONCE() to prevent tearing. + */ + WRITE_ONCE(luf_gen_done, ugen); + spin_unlock_irqrestore(&luf_lock, flags); +} + +void luf_flush(void) +{ + unsigned long flags; + unsigned short int ugen; + + /* + * Obtain the latest ugen number. 
+ */ + spin_lock_irqsave(&luf_lock, flags); + ugen = luf_gen; + spin_unlock_irqrestore(&luf_lock, flags); + + check_luf_flush(ugen); +} void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src) @@ -666,13 +930,15 @@ void fold_ubc(struct tlbflush_unmap_batch *dst, void try_to_unmap_flush(void) { struct tlbflush_unmap_batch *tlb_ubc = ¤t->tlb_ubc; - struct tlbflush_unmap_batch *tlb_ubc_ro = ¤t->tlb_ubc_ro; + struct tlbflush_unmap_batch *tlb_ubc_luf = ¤t->tlb_ubc_luf; - fold_ubc(tlb_ubc, tlb_ubc_ro); + fold_ubc(tlb_ubc, tlb_ubc_luf); if (!tlb_ubc->flush_required) return; + rmap_flush_start(); arch_tlbbatch_flush(&tlb_ubc->arch); + rmap_flush_end(tlb_ubc); arch_tlbbatch_clear(&tlb_ubc->arch); tlb_ubc->flush_required = false; tlb_ubc->writable = false; @@ -682,9 +948,9 @@ void try_to_unmap_flush(void) void try_to_unmap_flush_dirty(void) { struct tlbflush_unmap_batch *tlb_ubc = ¤t->tlb_ubc; - struct tlbflush_unmap_batch *tlb_ubc_ro = ¤t->tlb_ubc_ro; + struct tlbflush_unmap_batch *tlb_ubc_luf = ¤t->tlb_ubc_luf; - if (tlb_ubc->writable || tlb_ubc_ro->writable) + if (tlb_ubc->writable || tlb_ubc_luf->writable) try_to_unmap_flush(); } @@ -708,9 +974,15 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval, if (!pte_accessible(mm, pteval)) return; - if (pte_write(pteval)) + if (pte_write(pteval)) { tlb_ubc = ¤t->tlb_ubc; - else + + /* + * luf cannot work with the folio once it found a + * writable or dirty mapping on it. + */ + can_luf_fail(); + } else tlb_ubc = ¤t->tlb_ubc_ro; arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr); @@ -1976,11 +2248,23 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags) .done = folio_not_mapped, .anon_lock = folio_lock_anon_vma_read, }; + struct tlbflush_unmap_batch *tlb_ubc = ¤t->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc_ro = ¤t->tlb_ubc_ro; + struct tlbflush_unmap_batch *tlb_ubc_luf = ¤t->tlb_ubc_luf; + bool can_luf; + + can_luf_init(); if (flags & TTU_RMAP_LOCKED) rmap_walk_locked(folio, &rwc); else rmap_walk(folio, &rwc); + + can_luf = can_luf_folio(folio) && can_luf_test(); + if (can_luf) + fold_ubc(tlb_ubc_luf, tlb_ubc_ro); + else + fold_ubc(tlb_ubc, tlb_ubc_ro); } /* @@ -2325,6 +2609,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags) .done = folio_not_mapped, .anon_lock = folio_lock_anon_vma_read, }; + struct tlbflush_unmap_batch *tlb_ubc = ¤t->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc_ro = ¤t->tlb_ubc_ro; + struct tlbflush_unmap_batch *tlb_ubc_luf = ¤t->tlb_ubc_luf; + bool can_luf; /* * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and @@ -2349,10 +2637,18 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags) if (!folio_test_ksm(folio) && folio_test_anon(folio)) rwc.invalid_vma = invalid_migration_vma; + can_luf_init(); + if (flags & TTU_RMAP_LOCKED) rmap_walk_locked(folio, &rwc); else rmap_walk(folio, &rwc); + + can_luf = can_luf_folio(folio) && can_luf_test(); + if (can_luf) + fold_ubc(tlb_ubc_luf, tlb_ubc_ro); + else + fold_ubc(tlb_ubc, tlb_ubc_ro); } #ifdef CONFIG_DEVICE_PRIVATE From patchwork Fri May 10 06:52:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13660941 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63DF9C25B10 for ; Fri, 10 May 2024 06:52:49 
+0000 (UTC) From: Byungchul Park To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com Subject: [PATCH v10 10/12] mm: separate move/undo parts from migrate_pages_batch() Date: Fri, 10 May 2024 15:52:04 +0900 Message-Id: <20240510065206.76078-11-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240510065206.76078-1-byungchul@sk.com> References: <20240510065206.76078-1-byungchul@sk.com>
Functionally, no change. This is preparation for the luf mechanism, which needs separate folio lists for its own handling during migration. Refactor migrate_pages_batch() so that the move and undo steps are split out into their own helpers, migrate_folios_move() and migrate_folios_undo().
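For orientation only (this sketch is not part of the patch), the caller side of migrate_pages_batch() after the refactor boils down to the shape below, taken from the diff that follows with the surrounding retry loop and error handling omitted:

	/* Move the unmapped folios */
	migrate_folios_move(&unmap_folios, &dst_folios,
			put_new_folio, private, mode, reason,
			ret_folios, stats, &retry, &thp_retry,
			&nr_failed, &nr_retry_pages);
	...
out:
	/* Cleanup remaining folios */
	migrate_folios_undo(&unmap_folios, &dst_folios,
			put_new_folio, private, ret_folios);

Keeping the per-folio loops in helpers lets the next patch run the same move/undo steps over a second pair of lists dedicated to luf without duplicating the retry and accounting logic.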
Signed-off-by: Byungchul Park --- mm/migrate.c | 134 +++++++++++++++++++++++++++++++-------------------- 1 file changed, 83 insertions(+), 51 deletions(-) diff --git a/mm/migrate.c b/mm/migrate.c index c7692f303fa7..f9ed7a2b8720 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1609,6 +1609,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio, return nr_failed; } +static void migrate_folios_move(struct list_head *src_folios, + struct list_head *dst_folios, + free_folio_t put_new_folio, unsigned long private, + enum migrate_mode mode, int reason, + struct list_head *ret_folios, + struct migrate_pages_stats *stats, + int *retry, int *thp_retry, int *nr_failed, + int *nr_retry_pages) +{ + struct folio *folio, *folio2, *dst, *dst2; + bool is_thp; + int nr_pages; + int rc; + + dst = list_first_entry(dst_folios, struct folio, lru); + dst2 = list_next_entry(dst, lru); + list_for_each_entry_safe(folio, folio2, src_folios, lru) { + is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio); + nr_pages = folio_nr_pages(folio); + + cond_resched(); + + rc = migrate_folio_move(put_new_folio, private, + folio, dst, mode, + reason, ret_folios); + /* + * The rules are: + * Success: folio will be freed + * -EAGAIN: stay on the unmap_folios list + * Other errno: put on ret_folios list + */ + switch(rc) { + case -EAGAIN: + *retry += 1; + *thp_retry += is_thp; + *nr_retry_pages += nr_pages; + break; + case MIGRATEPAGE_SUCCESS: + stats->nr_succeeded += nr_pages; + stats->nr_thp_succeeded += is_thp; + break; + default: + *nr_failed += 1; + stats->nr_thp_failed += is_thp; + stats->nr_failed_pages += nr_pages; + break; + } + dst = dst2; + dst2 = list_next_entry(dst, lru); + } +} + +static void migrate_folios_undo(struct list_head *src_folios, + struct list_head *dst_folios, + free_folio_t put_new_folio, unsigned long private, + struct list_head *ret_folios) +{ + struct folio *folio, *folio2, *dst, *dst2; + + dst = list_first_entry(dst_folios, struct folio, lru); + dst2 = list_next_entry(dst, lru); + list_for_each_entry_safe(folio, folio2, src_folios, lru) { + int old_page_state = 0; + struct anon_vma *anon_vma = NULL; + + __migrate_folio_extract(dst, &old_page_state, &anon_vma); + migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED, + anon_vma, true, ret_folios); + list_del(&dst->lru); + migrate_folio_undo_dst(dst, true, put_new_folio, private); + dst = dst2; + dst2 = list_next_entry(dst, lru); + } +} + /* * migrate_pages_batch() first unmaps folios in the from list as many as * possible, then move the unmapped folios. 
@@ -1631,7 +1706,7 @@ static int migrate_pages_batch(struct list_head *from, int pass = 0; bool is_thp = false; bool is_large = false; - struct folio *folio, *folio2, *dst = NULL, *dst2; + struct folio *folio, *folio2, *dst = NULL; int rc, rc_saved = 0, nr_pages; LIST_HEAD(unmap_folios); LIST_HEAD(dst_folios); @@ -1790,42 +1865,11 @@ static int migrate_pages_batch(struct list_head *from, thp_retry = 0; nr_retry_pages = 0; - dst = list_first_entry(&dst_folios, struct folio, lru); - dst2 = list_next_entry(dst, lru); - list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) { - is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio); - nr_pages = folio_nr_pages(folio); - - cond_resched(); - - rc = migrate_folio_move(put_new_folio, private, - folio, dst, mode, - reason, ret_folios); - /* - * The rules are: - * Success: folio will be freed - * -EAGAIN: stay on the unmap_folios list - * Other errno: put on ret_folios list - */ - switch(rc) { - case -EAGAIN: - retry++; - thp_retry += is_thp; - nr_retry_pages += nr_pages; - break; - case MIGRATEPAGE_SUCCESS: - stats->nr_succeeded += nr_pages; - stats->nr_thp_succeeded += is_thp; - break; - default: - nr_failed++; - stats->nr_thp_failed += is_thp; - stats->nr_failed_pages += nr_pages; - break; - } - dst = dst2; - dst2 = list_next_entry(dst, lru); - } + /* Move the unmapped folios */ + migrate_folios_move(&unmap_folios, &dst_folios, + put_new_folio, private, mode, reason, + ret_folios, stats, &retry, &thp_retry, + &nr_failed, &nr_retry_pages); } nr_failed += retry; stats->nr_thp_failed += thp_retry; @@ -1834,20 +1878,8 @@ static int migrate_pages_batch(struct list_head *from, rc = rc_saved ? : nr_failed; out: /* Cleanup remaining folios */ - dst = list_first_entry(&dst_folios, struct folio, lru); - dst2 = list_next_entry(dst, lru); - list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) { - int old_page_state = 0; - struct anon_vma *anon_vma = NULL; - - __migrate_folio_extract(dst, &old_page_state, &anon_vma); - migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED, - anon_vma, true, ret_folios); - list_del(&dst->lru); - migrate_folio_undo_dst(dst, true, put_new_folio, private); - dst = dst2; - dst2 = list_next_entry(dst, lru); - } + migrate_folios_undo(&unmap_folios, &dst_folios, + put_new_folio, private, ret_folios); return rc; } From patchwork Fri May 10 06:52:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13660943 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5A4DC25B4F for ; Fri, 10 May 2024 06:52:55 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 60EF56B00A0; Fri, 10 May 2024 02:52:30 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5BE366B00A1; Fri, 10 May 2024 02:52:30 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3EBE76B00A2; Fri, 10 May 2024 02:52:30 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 232436B00A0 for ; Fri, 10 May 2024 02:52:30 -0400 (EDT) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 9AEA3416A9 for ; Fri, 10 
May 2024 06:52:29 +0000 (UTC) From: Byungchul Park To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com Subject: [PATCH v10 11/12] mm, migrate: apply luf mechanism to unmapping during migration Date: Fri, 10 May 2024 15:52:05 +0900 Message-Id: <20240510065206.76078-12-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240510065206.76078-1-byungchul@sk.com> References: <20240510065206.76078-1-byungchul@sk.com>
kzjY+ocdxGYWuMskcaCfDcQWFoiUeHPgGVg9i4CqxK+L01m7GDk4eIHq17YKQ4yUl1i94QDY GE6g8IdlM5hAbCEBU4mOBxMZJzByLWBkWMUokplXlpuYmWOqV5ydUZmXWaGXnJ+7iREYxstq /0zcwfjlsvshRgEORiUe3h2bbdKEWBPLiitzDzFKcDArifBW1VinCfGmJFZWpRblxxeV5qQW H2KU5mBREuf1Ck9NEBJITyxJzU5NLUgtgskycXBKNTDyGD78UXVh/tLXBtN9lZcrz87dcWz6 fjlu+a9fGKbYZ6vvieOtd564+PMzlaWVPxfx+i1wXb/9FMemc772q6wWFn3OFb2SY6ciJrhS P+yORN+5rsolF7hL1d6nL961V7Bhr/EBhf0WqQtKZDVjk+X2Bh/08mxrljM2a5q906Yw+9ZU yxq95EYlluKMREMt5qLiRAAmg/+iXwIAAA== X-CFilter-Loop: Reflected X-Rspamd-Server: rspam01 X-Stat-Signature: fhiqtgphgjuz8x4wcen9is8tex8gni8n X-Rspam-User: X-Rspamd-Queue-Id: 936DE40004 X-HE-Tag: 1715323947-989240 X-HE-Meta: U2FsdGVkX19xrCTKosK7EeoFB3I1dd5IZf/umzVdeZZJrbl5b/ahfvdtrYdCHYsZYEXfxoDL0ff/WYfymirMd+BDMaXFmsV+YY7YxUHqi7rx7VH/JiVWKps4uUf8hadq8F44ZvmkL4X+2tEA00TaZXl87AMElHM2QlSXRScdPhHfeo8Cgw0masSdNg0NRWSAoJVGh140VsPIx/dhG+UcTbXh9KLyS3UpA+XIFZW+xMos+s+XDtEDpdzzGgh8J0498saUKPncE1KBsTCmsD3M3lrXrp1GkxVAfUtDcHoYuq/Vrforae6LYS+8nTyBmby7UKhtErHAtv738Lsb/emEGZX3qErRvpCt346TYQ2ZQ1EhseXulFRJ6Zd50m8oteqJhROTBTduH/vkz/v2wD7edspg8ietttNhFEGYbs88VBoBwY2Nnf7q4GCO3+ip/7at88GRvLuFGGkh/0zsCu43FtqNU9W4/LB6AKn7JFh/y9uVRHSxwc78yWDfE1bfUgYm7xNM6KkLdT5RcRKQWBScf+CBZAlt5ndbFCLLVgn9w+Khx45LUNNUkKgFB+IvEFiBKy/cXz1yjrDvKdiiM2Pr7aEuYFvy0rYeOwcXq1cSA/Oy6nBTCZTJarqYwJoMvPf6EmEh4tnj9oOq9Yc/eQ9TWzF/8/6MVO9QnY6PgEZlf4zii1Z+/IqRaUQFjQ8LooKIg1cL7okghyQ0Ob//WR6MmQNxhwdcDxPjXfD7q3Jxu6YRPYyx/wY0pvvossGOgsXOluedZb8PsAIJ5mn3PI/KOeOmbTvN2fU92WLr8ZQdUDnNmkzf4WTRb7fuI26VMUadUSMrDhkF1/IN7QEvKxe2Lklxo3nKIWqKkyzy5MKiVFcD8DE21JowJXAxwkOTELbZJBIiN3kc7osXRbBo5icKhd0RhNhHzeVEIAESINTCdvBJ12Zlf9bxQ9VCRXOGB8dTu/FgOf/Y318oVZd3rVu 9jA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: A new mechanism, LUF(Lazy Unmap Flush), defers tlb flush until folios that have been unmapped and freed, eventually get allocated again. It's safe for folios that had been mapped read only and were unmapped, since the contents of the folios don't change while staying in pcp or buddy so we can still read the data through the stale tlb entries. Applied the mechanism to unmapping during migration. 
Signed-off-by: Byungchul Park --- include/linux/rmap.h | 2 +- mm/migrate.c | 56 ++++++++++++++++++++++++++++++++------------ mm/rmap.c | 9 ++++--- 3 files changed, 48 insertions(+), 19 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 0f906dc6d280..1898a2c1c087 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -657,7 +657,7 @@ static inline int folio_try_share_anon_rmap_pmd(struct folio *folio, int folio_referenced(struct folio *, int is_locked, struct mem_cgroup *memcg, unsigned long *vm_flags); -void try_to_migrate(struct folio *folio, enum ttu_flags flags); +bool try_to_migrate(struct folio *folio, enum ttu_flags flags); void try_to_unmap(struct folio *, enum ttu_flags flags); int make_device_exclusive_range(struct mm_struct *mm, unsigned long start, diff --git a/mm/migrate.c b/mm/migrate.c index f9ed7a2b8720..c8b0e5203e9a 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1090,7 +1090,8 @@ static void migrate_folio_undo_dst(struct folio *dst, bool locked, /* Cleanup src folio upon migration success */ static void migrate_folio_done(struct folio *src, - enum migrate_reason reason) + enum migrate_reason reason, + unsigned short int ugen) { /* * Compaction can migrate also non-LRU pages which are @@ -1101,8 +1102,12 @@ static void migrate_folio_done(struct folio *src, mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON + folio_is_file_lru(src), -folio_nr_pages(src)); - if (reason != MR_MEMORY_FAILURE) - /* We release the page in page_handle_poison. */ + /* We release the page in page_handle_poison. */ + if (reason == MR_MEMORY_FAILURE) + check_luf_flush(ugen); + else if (ugen) + folio_put_ugen(src, ugen); + else folio_put(src); } @@ -1110,7 +1115,8 @@ static void migrate_folio_done(struct folio *src, static int migrate_folio_unmap(new_folio_t get_new_folio, free_folio_t put_new_folio, unsigned long private, struct folio *src, struct folio **dstp, enum migrate_mode mode, - enum migrate_reason reason, struct list_head *ret) + enum migrate_reason reason, struct list_head *ret, + bool *can_luf) { struct folio *dst; int rc = -EAGAIN; @@ -1126,7 +1132,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio, folio_clear_unevictable(src); /* free_pages_prepare() will clear PG_isolated. */ list_del(&src->lru); - migrate_folio_done(src, reason); + migrate_folio_done(src, reason, 0); return MIGRATEPAGE_SUCCESS; } @@ -1244,7 +1250,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio, /* Establish migration ptes */ VM_BUG_ON_FOLIO(folio_test_anon(src) && !folio_test_ksm(src) && !anon_vma, src); - try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0); + *can_luf = try_to_migrate(src, mode == MIGRATE_ASYNC ? 
TTU_BATCH_FLUSH : 0); old_page_state |= PAGE_WAS_MAPPED; } @@ -1272,7 +1278,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio, static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private, struct folio *src, struct folio *dst, enum migrate_mode mode, enum migrate_reason reason, - struct list_head *ret) + struct list_head *ret, unsigned short int ugen) { int rc; int old_page_state = 0; @@ -1326,7 +1332,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private, if (anon_vma) put_anon_vma(anon_vma); folio_unlock(src); - migrate_folio_done(src, reason); + migrate_folio_done(src, reason, ugen); return rc; out: @@ -1616,7 +1622,7 @@ static void migrate_folios_move(struct list_head *src_folios, struct list_head *ret_folios, struct migrate_pages_stats *stats, int *retry, int *thp_retry, int *nr_failed, - int *nr_retry_pages) + int *nr_retry_pages, unsigned short int ugen) { struct folio *folio, *folio2, *dst, *dst2; bool is_thp; @@ -1633,7 +1639,7 @@ static void migrate_folios_move(struct list_head *src_folios, rc = migrate_folio_move(put_new_folio, private, folio, dst, mode, - reason, ret_folios); + reason, ret_folios, ugen); /* * The rules are: * Success: folio will be freed @@ -1710,7 +1716,11 @@ static int migrate_pages_batch(struct list_head *from, int rc, rc_saved = 0, nr_pages; LIST_HEAD(unmap_folios); LIST_HEAD(dst_folios); + LIST_HEAD(unmap_folios_luf); + LIST_HEAD(dst_folios_luf); bool nosplit = (reason == MR_NUMA_MISPLACED); + unsigned short int ugen; + bool can_luf; VM_WARN_ON_ONCE(mode != MIGRATE_ASYNC && !list_empty(from) && !list_is_singular(from)); @@ -1773,9 +1783,11 @@ static int migrate_pages_batch(struct list_head *from, continue; } + can_luf = false; rc = migrate_folio_unmap(get_new_folio, put_new_folio, private, folio, &dst, mode, reason, - ret_folios); + ret_folios, &can_luf); + /* * The rules are: * Success: folio will be freed @@ -1821,7 +1833,8 @@ static int migrate_pages_batch(struct list_head *from, /* nr_failed isn't updated for not used */ stats->nr_thp_failed += thp_retry; rc_saved = rc; - if (list_empty(&unmap_folios)) + if (list_empty(&unmap_folios) && + list_empty(&unmap_folios_luf)) goto out; else goto move; @@ -1835,8 +1848,13 @@ static int migrate_pages_batch(struct list_head *from, stats->nr_thp_succeeded += is_thp; break; case MIGRATEPAGE_UNMAP: - list_move_tail(&folio->lru, &unmap_folios); - list_add_tail(&dst->lru, &dst_folios); + if (can_luf) { + list_move_tail(&folio->lru, &unmap_folios_luf); + list_add_tail(&dst->lru, &dst_folios_luf); + } else { + list_move_tail(&folio->lru, &unmap_folios); + list_add_tail(&dst->lru, &dst_folios); + } break; default: /* @@ -1856,6 +1874,8 @@ static int migrate_pages_batch(struct list_head *from, stats->nr_thp_failed += thp_retry; stats->nr_failed_pages += nr_retry_pages; move: + /* Should be before try_to_unmap_flush() */ + ugen = try_to_unmap_luf(); /* Flush TLBs for all unmapped folios */ try_to_unmap_flush(); @@ -1869,7 +1889,11 @@ static int migrate_pages_batch(struct list_head *from, migrate_folios_move(&unmap_folios, &dst_folios, put_new_folio, private, mode, reason, ret_folios, stats, &retry, &thp_retry, - &nr_failed, &nr_retry_pages); + &nr_failed, &nr_retry_pages, 0); + migrate_folios_move(&unmap_folios_luf, &dst_folios_luf, + put_new_folio, private, mode, reason, + ret_folios, stats, &retry, &thp_retry, + &nr_failed, &nr_retry_pages, ugen); } nr_failed += retry; stats->nr_thp_failed += thp_retry; @@ -1880,6 +1904,8 @@ static int 
migrate_pages_batch(struct list_head *from, /* Cleanup remaining folios */ migrate_folios_undo(&unmap_folios, &dst_folios, put_new_folio, private, ret_folios); + migrate_folios_undo(&unmap_folios_luf, &dst_folios_luf, + put_new_folio, private, ret_folios); return rc; } diff --git a/mm/rmap.c b/mm/rmap.c index e42783c02114..d25ae20a47b5 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -2600,8 +2600,9 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, * * Tries to remove all the page table entries which are mapping this folio and * replace them with special swap entries. Caller must hold the folio lock. + * Return true if all the mappings are read-only, otherwise false. */ -void try_to_migrate(struct folio *folio, enum ttu_flags flags) +bool try_to_migrate(struct folio *folio, enum ttu_flags flags) { struct rmap_walk_control rwc = { .rmap_one = try_to_migrate_one, @@ -2620,11 +2621,11 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags) */ if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD | TTU_SYNC | TTU_BATCH_FLUSH))) - return; + return false; if (folio_is_zone_device(folio) && (!folio_is_device_private(folio) && !folio_is_device_coherent(folio))) - return; + return false; /* * During exec, a temporary VMA is setup and later moved. @@ -2649,6 +2650,8 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags) fold_ubc(tlb_ubc_luf, tlb_ubc_ro); else fold_ubc(tlb_ubc, tlb_ubc_ro); + + return can_luf; } #ifdef CONFIG_DEVICE_PRIVATE From patchwork Fri May 10 06:52:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13660944 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68A57C25B10 for ; Fri, 10 May 2024 06:52:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E98A86B00A1; Fri, 10 May 2024 02:52:30 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E48446B00A2; Fri, 10 May 2024 02:52:30 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D39856B00A4; Fri, 10 May 2024 02:52:30 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id AA2616B00A1 for ; Fri, 10 May 2024 02:52:30 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 68B43403F8 for ; Fri, 10 May 2024 06:52:30 +0000 (UTC) X-FDA: 82101567660.05.2BB5D0D Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by imf02.hostedemail.com (Postfix) with ESMTP id 978B78000A for ; Fri, 10 May 2024 06:52:28 +0000 (UTC) Authentication-Results: imf02.hostedemail.com; dkim=none; spf=pass (imf02.hostedemail.com: domain of byungchul@sk.com designates 166.125.252.92 as permitted sender) smtp.mailfrom=byungchul@sk.com; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1715323949; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:content-type: content-transfer-encoding:in-reply-to:in-reply-to: references:references; bh=AbBZEoi3YvdH4MCXIQk6zH0Bh463GhyHOD1MRB89vaI=; 
From: Byungchul Park To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com Subject: [PATCH v10 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim Date: Fri, 10 May 2024 15:52:06 +0900 Message-Id: <20240510065206.76078-13-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240510065206.76078-1-byungchul@sk.com> References: <20240510065206.76078-1-byungchul@sk.com>
A new mechanism, LUF(Lazy Unmap Flush), defers tlb flush until folios that have been unmapped and freed, eventually get allocated again. It's safe for folios that had been mapped read only and were unmapped, since the contents of the folios don't change while staying in pcp or buddy so we can still read the data through the stale tlb entries. Applied the mechanism to unmapping during folio reclaim. Signed-off-by: Byungchul Park --- include/linux/rmap.h | 5 +++-- mm/rmap.c | 5 ++++- mm/vmscan.c | 21 ++++++++++++++++++++- 3 files changed, 27 insertions(+), 4 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 1898a2c1c087..9ca752f8de97 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -658,7 +658,7 @@ int folio_referenced(struct folio *, int is_locked, struct mem_cgroup *memcg, unsigned long *vm_flags); bool try_to_migrate(struct folio *folio, enum ttu_flags flags); -void try_to_unmap(struct folio *, enum ttu_flags flags); +bool try_to_unmap(struct folio *, enum ttu_flags flags); int make_device_exclusive_range(struct mm_struct *mm, unsigned long start, unsigned long end, struct page **pages, @@ -777,8 +777,9 @@ static inline int folio_referenced(struct folio *folio, int is_locked, return 0; } -static inline void try_to_unmap(struct folio *folio, enum ttu_flags flags) +static inline bool try_to_unmap(struct folio *folio, enum ttu_flags flags) { + return false; } static inline int folio_mkclean(struct folio *folio) diff --git a/mm/rmap.c b/mm/rmap.c index d25ae20a47b5..571e337af448 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -2237,10 +2237,11 @@ static int folio_not_mapped(struct folio *folio) * Tries to remove all the page table entries which are mapping this * folio. It is the caller's responsibility to check if the folio is * still mapped if needed (use TTU_SYNC to prevent accounting races). + * Return true if all the mappings are read-only, otherwise false. * * Context: Caller must hold the folio lock.
*/ -void try_to_unmap(struct folio *folio, enum ttu_flags flags) +bool try_to_unmap(struct folio *folio, enum ttu_flags flags) { struct rmap_walk_control rwc = { .rmap_one = try_to_unmap_one, @@ -2265,6 +2266,8 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags) fold_ubc(tlb_ubc_luf, tlb_ubc_ro); else fold_ubc(tlb_ubc, tlb_ubc_ro); + + return can_luf; } /* diff --git a/mm/vmscan.c b/mm/vmscan.c index bb0ff11f9ec9..4e2e9d07cd96 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1031,14 +1031,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, struct reclaim_stat *stat, bool ignore_references) { struct folio_batch free_folios; + struct folio_batch free_folios_luf; LIST_HEAD(ret_folios); LIST_HEAD(demote_folios); unsigned int nr_reclaimed = 0; unsigned int pgactivate = 0; bool do_demote_pass; struct swap_iocb *plug = NULL; + unsigned short int ugen; folio_batch_init(&free_folios); + folio_batch_init(&free_folios_luf); memset(stat, 0, sizeof(*stat)); cond_resched(); do_demote_pass = can_demote(pgdat->node_id, sc); @@ -1050,6 +1053,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, enum folio_references references = FOLIOREF_RECLAIM; bool dirty, writeback; unsigned int nr_pages; + bool can_luf = false; cond_resched(); @@ -1292,7 +1296,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, if (folio_test_large(folio) && list_empty(&folio->_deferred_list)) flags |= TTU_SYNC; - try_to_unmap(folio, flags); + can_luf = try_to_unmap(folio, flags); if (folio_mapped(folio)) { stat->nr_unmap_fail += nr_pages; if (!was_swapbacked && @@ -1457,6 +1461,18 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, if (folio_test_large(folio) && folio_test_large_rmappable(folio)) folio_undo_large_rmappable(folio); + + if (can_luf) { + if (folio_batch_add(&free_folios_luf, folio) == 0) { + mem_cgroup_uncharge_folios(&free_folios_luf); + ugen = try_to_unmap_luf(); + if (!ugen) + try_to_unmap_flush(); + free_unref_folios(&free_folios_luf, ugen); + } + continue; + } + if (folio_batch_add(&free_folios, folio) == 0) { mem_cgroup_uncharge_folios(&free_folios); try_to_unmap_flush(); @@ -1526,8 +1542,11 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, pgactivate = stat->nr_activate[0] + stat->nr_activate[1]; mem_cgroup_uncharge_folios(&free_folios); + mem_cgroup_uncharge_folios(&free_folios_luf); + ugen = try_to_unmap_luf(); try_to_unmap_flush(); free_unref_folios(&free_folios, 0); + free_unref_folios(&free_folios_luf, ugen); list_splice(&ret_folios, folio_list); count_vm_events(PGACTIVATE, pgactivate);