From patchwork Mon Feb 6 06:33:04 2023
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 13129348
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Huang, Ying",
    Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox,
    Bharata B Rao, Alistair Popple, haoxin, Minchan Kim, Mike Kravetz,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH -v4 0/9] migrate_pages(): batch TLB flushing
Date: Mon, 6 Feb 2023 14:33:04 +0800
Message-Id: <20230206063313.635011-1-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
MIME-Version: 1.0

From: "Huang, Ying"

Currently, migrate_pages() migrates folios one by one, as in the
following pseudo-code:

  for each folio
    unmap
    flush TLB
    copy
    restore map

If multiple folios are passed to migrate_pages(), there are
opportunities to batch the TLB flushing and the copying.  That is, we
can change the flow to something like the following:

  for each folio
    unmap
  for each folio
    flush TLB
  for each folio
    copy
  for each folio
    restore map

The total number of TLB flushing IPIs can be reduced considerably, and
a hardware accelerator such as DSA may be used to accelerate the folio
copying.

So, in this patchset, we refactor the migrate_pages() implementation
and implement batched TLB flushing.  Based on this, hardware-accelerated
folio copying can be implemented later.
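For illustration only (this is not the code in the patchset), the
batched flow above could be sketched in kernel-style C roughly as
follows.  The helpers unmap_one(), flush_tlb_batched_for(), copy_one(),
and remap_one() are hypothetical placeholders, and the batch is assumed
to be capped as described in the next paragraph.

  /*
   * Illustrative sketch only -- not the implementation in this
   * patchset.  All helpers below are hypothetical placeholders.
   */
  static int migrate_folios_batched(struct list_head *folios)
  {
          struct folio *folio;
          int nr_moved = 0;

          /* Phase 1: unmap every folio in the (bounded) batch. */
          list_for_each_entry(folio, folios, lru)
                  unmap_one(folio);

          /* Phase 2: one batched TLB flush instead of one IPI per folio. */
          flush_tlb_batched_for(folios);

          /*
           * Phase 3: copy the contents; this step could later be
           * offloaded to a hardware accelerator such as DSA.
           */
          list_for_each_entry(folio, folios, lru)
                  copy_one(folio);

          /* Phase 4: restore the mappings to point at the new folios. */
          list_for_each_entry(folio, folios, lru)
                  if (remap_one(folio) == 0)
                          nr_moved++;

          return nr_moved;
  }

The point of the restructuring is that the expensive TLB flush is paid
once per batch rather than once per folio.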
If too many folios are passed to migrate_pages(), a naive batched
implementation may unmap too many folios at the same time.  That
increases the chance that a task has to wait for the migrated folios to
be mapped again, so latency may suffer.  To deal with this issue, the
maximum number of folios unmapped in one batch (as sketched above) is
restricted to no more than HPAGE_PMD_NR pages.  That is, the impact is
at the same level as that of THP migration.

We use the following test to measure the performance impact of the
patchset.  On a 2-socket Intel server,

- Run the pmbench memory-accessing benchmark.
- Run `migratepages` to migrate the pages of pmbench between node 0 and
  node 1 back and forth.

With the patchset, the number of TLB flushing IPIs is reduced by 99.1%
during the test, and the number of pages migrated successfully per
second increases by 291.7%.

This patchset is based on v6.2-rc4.

Changes:

v4:

- Fixed another bug about non-LRU folio migration.  Thanks Hyeonggon!

v3:

- Rebased on v6.2-rc4
- Fixed a bug about non-LRU folio migration.  Thanks Mike!
- Fixed some comments.  Thanks Baolin!
- Collected reviewed-by tags.

v2:

- Rebased on v6.2-rc3
- Fixed a forced type cast warning.  Thanks Kees!
- Added more comments and cleaned up the code.  Thanks Andrew, Zi,
  Alistair, Dan!
- Collected reviewed-by tags.

from rfc to v1:

- Rebased on v6.2-rc1
- Fixed the deadlock issue caused by locking multiple pages
  synchronously, per Alistair's comments.  Thanks!
- Fixed the autonumabench panic, per Rao's comments and fix.  Thanks!
- Other minor fixes per comments.  Thanks!

Best Regards,
Huang, Ying