From patchwork Wed Sep 21 06:06:16 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 12983243
From: Huang Ying <ying.huang@intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Huang Ying, Zi Yan,
 Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox
Subject: [RFC 6/6] mm/migrate_pages: batch flushing TLB
Date: Wed, 21 Sep 2022 14:06:16 +0800
Message-Id: <20220921060616.73086-7-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220921060616.73086-1-ying.huang@intel.com>
References: <20220921060616.73086-1-ying.huang@intel.com>
The TLB flushing can cost quite a few CPU cycles during page migration in
some situations, for example, when migrating pages of a process with
multiple active threads that run on multiple CPUs.  After batching the
_unmap and _move stages in migrate_pages(), the TLB flushing can be
batched easily with the existing TLB flush batching mechanism.  This
patch implements that.

We use the following test case to test the patch.

On a 2-socket Intel server,

- Run the pmbench memory accessing benchmark.
- Run `migratepages` to migrate pages of pmbench between node 0 and
  node 1 back and forth.

With the patch, the number of TLB flushing IPIs is reduced by 99.1%
during the test, and the number of pages migrated successfully per
second increases by 291.7%.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
---
 mm/migrate.c |  4 +++-
 mm/rmap.c    | 24 ++++++++++++++++++++----
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 042fa147f302..a0de0d9b4d41 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1179,7 +1179,7 @@ static int migrate_page_unmap(new_page_t get_new_page, free_page_t put_new_page,
 		/* Establish migration ptes */
 		VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma, page);
-		try_to_migrate(folio, 0);
+		try_to_migrate(folio, TTU_BATCH_FLUSH);
 		page_was_mapped = 1;
 	}
 
@@ -1647,6 +1647,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	nr_thp_failed += thp_retry;
 	nr_failed_pages += nr_retry_pages;
 move:
+	try_to_unmap_flush();
+
 	retry = 1;
 	thp_retry = 1;
 	for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 93d5a6f793d2..ab88136720dc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1960,8 +1960,24 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
-			/* Nuke the page table entry. */
-			pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			/*
+			 * Nuke the page table entry.
+			 */
+			if (should_defer_flush(mm, flags)) {
+				/*
+				 * We clear the PTE but do not flush so potentially
+				 * a remote CPU could still be writing to the folio.
+				 * If the entry was previously clean then the
+				 * architecture must guarantee that a clear->dirty
+				 * transition on a cached TLB entry is written through
+				 * and traps if the PTE is unmapped.
+				 */
+				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+
+				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			} else {
+				pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			}
 		}
 
 		/* Set the dirty flag on the folio now the pte is gone. */
@@ -2128,10 +2144,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 
 	/*
 	 * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
-	 * TTU_SPLIT_HUGE_PMD and TTU_SYNC flags.
+	 * TTU_SPLIT_HUGE_PMD, TTU_SYNC and TTU_BATCH_FLUSH flags.
 	 */
 	if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-					TTU_SYNC)))
+					TTU_SYNC | TTU_BATCH_FLUSH)))
 		return;
 
 	if (folio_is_zone_device(folio) &&
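
Not part of the kernel diff above: as a rough illustration of why the
defer-then-flush pattern helps, below is a minimal, self-contained
user-space C sketch of the same idea.  All names in it (pte_clear_no_flush,
pte_clear_and_flush, tlb_flush_all, nr_flush_ipis, NR_PAGES) are
hypothetical stand-ins for the kernel's ptep_get_and_clear() /
set_tlb_ubc_flush_pending() / try_to_unmap_flush() machinery, and the
counter only models IPI traffic, not real TLB state.

    /* Illustrative sketch only; hypothetical names, not kernel APIs. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NR_PAGES 512

    static bool tlb_flush_pending;
    static int nr_flush_ipis;

    /* Stand-in for ptep_get_and_clear(): clear, but defer the flush. */
    static void pte_clear_no_flush(int page)
    {
            (void)page;                 /* mapping for @page cleared here */
            tlb_flush_pending = true;   /* like set_tlb_ubc_flush_pending() */
    }

    /* Stand-in for ptep_clear_flush(): one flush (IPI) per page. */
    static void pte_clear_and_flush(int page)
    {
            (void)page;                 /* mapping for @page cleared here */
            nr_flush_ipis++;            /* remote CPUs interrupted per page */
    }

    /* Stand-in for try_to_unmap_flush(): flush everything pending at once. */
    static void tlb_flush_all(void)
    {
            if (tlb_flush_pending) {
                    nr_flush_ipis++;    /* a single IPI covers all cleared PTEs */
                    tlb_flush_pending = false;
            }
    }

    int main(void)
    {
            int page;

            /* Unbatched: one flush per migrated page. */
            for (page = 0; page < NR_PAGES; page++)
                    pte_clear_and_flush(page);
            printf("unbatched flush IPIs: %d\n", nr_flush_ipis);

            /* Batched, as in this series: defer, then flush once before moving. */
            nr_flush_ipis = 0;
            for (page = 0; page < NR_PAGES; page++)
                    pte_clear_no_flush(page);
            tlb_flush_all();
            printf("batched flush IPIs:   %d\n", nr_flush_ipis);

            return 0;
    }

The sketch mirrors the structure of the patch: clearing all PTEs first and
issuing one flush before the move phase replaces NR_PAGES per-page flush
IPIs with a single one, which is where the IPI reduction in the test
comes from.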