From patchwork Tue Jan 10 07:53:26 2023
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 13094783
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox,
    Bharata B Rao, Alistair Popple, haoxin
Subject: [PATCH -v2 8/9] migrate_pages: batch flushing TLB
Date: Tue, 10 Jan 2023 15:53:26 +0800
Message-Id: <20230110075327.590514-9-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20230110075327.590514-1-ying.huang@intel.com>
References: <20230110075327.590514-1-ying.huang@intel.com>
MIME-Version: 1.0

TLB flushing can cost a significant number of CPU cycles during folio
migration in some situations, for example, when a folio of a process
with multiple active threads running on multiple CPUs is migrated.

After batching the _unmap and _move stages in migrate_pages(), the TLB
flushing can be batched easily on top of the existing TLB flush
batching mechanism.  This patch implements that.

We used the following test case to evaluate the patch.  On a 2-socket
Intel server:

- Run the pmbench memory accessing benchmark.
- Run `migratepages` to migrate the pages of pmbench between node 0
  and node 1 back and forth.

With the patch, the number of TLB flushing IPIs is reduced by 99.1%
during the test, and the number of pages migrated successfully per
second increases by 291.7%.

NOTE: TLB flushing is batched only for normal folios, not for THP
folios, because the TLB flushing overhead of a THP folio is much lower
than that of the same amount of normal folios (about 1/512 on the x86
platform).
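For readers following along, the resulting control flow in
migrate_pages_batch() is roughly the following.  This is a minimal
sketch only, assuming a local folio_list of the folios being migrated;
the real logic (with retries and large-folio handling) is in the hunks
below:

	struct folio *folio;

	/*
	 * Unmap phase: with TTU_BATCH_FLUSH, try_to_migrate() clears
	 * the PTEs but defers the TLB flush, recording the pending
	 * flush with set_tlb_ubc_flush_pending().
	 */
	list_for_each_entry(folio, folio_list, lru)
		try_to_migrate(folio, TTU_BATCH_FLUSH);

	/*
	 * One batched flush (a single round of IPIs) now covers every
	 * folio unmapped above, instead of one flush per cleared PTE.
	 */
	try_to_unmap_flush();

	/* Move phase: copy each folio and remove its migration PTEs. */

This is where the 99.1% IPI reduction comes from: a whole batch of
folios shares one flush instead of paying for one ptep_clear_flush()
per mapped PTE.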
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Alistair Popple
Cc: haoxin
---
 mm/migrate.c |  4 +++-
 mm/rmap.c    | 20 +++++++++++++++++---
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6c721b897efd..6adaea05b80a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1230,7 +1230,7 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
 		/* Establish migration ptes */
 		VM_BUG_ON_FOLIO(folio_test_anon(src) && !folio_test_ksm(src) &&
 			       !anon_vma, src);
-		try_to_migrate(src, 0);
+		try_to_migrate(src, TTU_BATCH_FLUSH);
 		page_was_mapped = 1;
 	}
 
@@ -1773,6 +1773,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	stats->nr_thp_failed += thp_retry;
 	stats->nr_failed_pages += nr_retry_pages;
 move:
+	try_to_unmap_flush();
+
 	retry = 1;
 	for (pass = 0;
 	     pass < NR_MAX_MIGRATE_PAGES_RETRY && (retry || large_retry);
diff --git a/mm/rmap.c b/mm/rmap.c
index b616870a09be..2e125f3e462e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1976,7 +1976,21 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
 			/* Nuke the page table entry. */
-			pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			if (should_defer_flush(mm, flags)) {
+				/*
+				 * We clear the PTE but do not flush so potentially
+				 * a remote CPU could still be writing to the folio.
+				 * If the entry was previously clean then the
+				 * architecture must guarantee that a clear->dirty
+				 * transition on a cached TLB entry is written through
+				 * and traps if the PTE is unmapped.
+				 */
+				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+
+				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			} else {
+				pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			}
 		}
 
 		/* Set the dirty flag on the folio now the pte is gone. */
@@ -2148,10 +2162,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 
 	/*
 	 * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
-	 * TTU_SPLIT_HUGE_PMD and TTU_SYNC flags.
+	 * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
 	 */
 	if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-					TTU_SYNC)))
+					TTU_SYNC | TTU_BATCH_FLUSH)))
 		return;
 
 	if (folio_is_zone_device(folio) &&