From patchwork Fri Apr 8 07:12:20 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12806180
From: Huang Ying
To: Peter Zijlstra, Mel Gorman, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Michal Hocko, Rik van Riel, Dave Hansen, Yang Shi, Zi Yan,
    Wei Xu, osalvador, Shakeel Butt, Zhong Jiang
Subject: [PATCH 1/3] memory tiering: hot page selection with hint page fault latency
Date: Fri, 8 Apr 2022 15:12:20 +0800
Message-Id: <20220408071222.219689-2-ying.huang@intel.com>
In-Reply-To: <20220408071222.219689-1-ying.huang@intel.com>
References: <20220408071222.219689-1-ying.huang@intel.com>

To optimize page placement in a memory tiering system with NUMA
balancing, the hot pages in the slow memory node need to be identified.
Essentially, the original NUMA balancing implementation selects the most
recently accessed (MRU) pages as the hot pages.  But that is not a very
good algorithm for identifying hot pages.  So in this patch we implement
a better hot page selection algorithm, based on NUMA balancing page
table scanning and hint page faults, as follows:

- When the page tables of the processes are scanned to change PTE/PMD
  to PROT_NONE, the current time is recorded in struct page as the scan
  time.

- When the page is accessed, a hint page fault occurs.  The scan time
  is read from struct page, and the hint page fault latency is defined
  as

	hint page fault time - scan time

The shorter the hint page fault latency of a page, the more likely the
page is being accessed frequently.  So the hint page fault latency is a
good estimate of page hotness.

It is hard to find extra space in struct page to hold the scan time,
though.  Fortunately, we can reuse some bits used by the original NUMA
balancing.  NUMA balancing uses some bits in struct page to store the
accessing CPU and PID of the page (see page_cpupid_xchg_last()).  These
are used by the multi-stage node selection algorithm to avoid migrating
pages shared among multiple NUMA nodes back and forth.  But for pages in
the slow memory node, even if they are shared among multiple NUMA nodes,
they need to be promoted to the fast memory node as long as they are
hot.  So the accessing CPU and PID information is unnecessary for slow
memory pages, and we can reuse these bits in struct page to record their
scan time.  For fast memory pages, these bits are used as before.

For the hot threshold, the default value is 1 second, which works well
in our performance tests.  All pages with a hint page fault latency
below the threshold are considered hot.  A debugfs interface is also
provided to adjust the hot threshold.

The downside of the above method is that the response time to a change
in the workload's hot spot may be longer.  For example,

- A previously cold memory area becomes hot.

- A hint page fault is triggered, but the measured hint page fault
  latency isn't shorter than the hot threshold, so the pages are not
  promoted.

- When the memory area is scanned again, perhaps after a scan period,
  the measured hint page fault latency will be shorter than the hot
  threshold and the pages will be promoted.

To mitigate this,

- If there is enough free space in the fast memory node, the hot
  threshold is not used and all pages are promoted upon the hint page
  fault, for fast response.

- If fast response is more important for system performance, the
  administrator can set a larger hot threshold.

Thanks to Zhong Jiang, who reported and tested the fix for a bug that
showed up when memory tiering mode was disabled dynamically.
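
To illustrate the idea, below is a minimal user-space C model of the
latency check described above.  The bucket/mask arithmetic mirrors
xchg_page_access_time() and numa_hint_fault_latency() in the patch, but
this is only a sketch: LAST_CPUPID_SHIFT is assumed to be 8 here (in the
kernel it is configuration dependent), and struct mock_page merely
stands in for the reused bits in page->flags.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define LAST_CPUPID_SHIFT	8	/* assumed width of the reused field */
#define PAGE_ACCESS_TIME_MIN_BITS 12	/* must hold at least 4 seconds */
#define PAGE_ACCESS_TIME_BUCKETS \
	(PAGE_ACCESS_TIME_MIN_BITS - LAST_CPUPID_SHIFT)
#define LAST_CPUPID_MASK	((1 << LAST_CPUPID_SHIFT) - 1)
#define PAGE_ACCESS_TIME_MASK	(LAST_CPUPID_MASK << PAGE_ACCESS_TIME_BUCKETS)

struct mock_page { _Atomic int cpupid_bits; };	/* stand-in for page->flags bits */

/* Record the truncated scan time, return the previous one (rescaled). */
static int xchg_access_time(struct mock_page *p, int time_ms)
{
	int old = atomic_exchange(&p->cpupid_bits,
				  (time_ms >> PAGE_ACCESS_TIME_BUCKETS) &
				  LAST_CPUPID_MASK);
	return old << PAGE_ACCESS_TIME_BUCKETS;
}

/* hint page fault latency = fault time - scan time, modulo the field width */
static bool page_is_hot(struct mock_page *p, int now_ms, int threshold_ms)
{
	int latency = (now_ms - xchg_access_time(p, now_ms)) &
		      PAGE_ACCESS_TIME_MASK;
	return latency < threshold_ms;
}

int main(void)
{
	struct mock_page page = { 0 };

	xchg_access_time(&page, 100);			/* "scan" at t = 100 ms */
	printf("%d\n", page_is_hot(&page, 600, 1000));	/* fault 500 ms later: hot (1) */
	return 0;
}

The masking in page_is_hot() keeps the subtraction well defined when the
truncated timestamp wraps around, at the cost of limiting the measurable
latency range to what the reused bits can represent.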

Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: Zhong Jiang
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm.h   | 30 ++++++++++++++++++
 kernel/sched/debug.c |  1 +
 kernel/sched/fair.c  | 74 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |  1 +
 mm/huge_memory.c     | 13 ++++++--
 mm/memory.c          | 11 ++++++-
 mm/migrate.c         | 12 +++++++
 mm/mprotect.c        |  8 ++++-
 8 files changed, 145 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e34edb775334..455a3d0e699d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1311,6 +1311,18 @@ static inline int folio_nid(const struct folio *folio)
 }

 #ifdef CONFIG_NUMA_BALANCING
+/* page access time bits need to hold at least 4 seconds */
+#define PAGE_ACCESS_TIME_MIN_BITS	12
+#if LAST_CPUPID_SHIFT < PAGE_ACCESS_TIME_MIN_BITS
+#define PAGE_ACCESS_TIME_BUCKETS \
+	(PAGE_ACCESS_TIME_MIN_BITS - LAST_CPUPID_SHIFT)
+#else
+#define PAGE_ACCESS_TIME_BUCKETS	0
+#endif
+
+#define PAGE_ACCESS_TIME_MASK \
+	(LAST_CPUPID_MASK << PAGE_ACCESS_TIME_BUCKETS)
+
 static inline int cpu_pid_to_cpupid(int cpu, int pid)
 {
 	return ((cpu & LAST__CPU_MASK) << LAST__PID_SHIFT) | (pid & LAST__PID_MASK);
@@ -1346,6 +1358,11 @@ static inline bool __cpupid_match_pid(pid_t task_pid, int cpupid)
 	return (task_pid & LAST__PID_MASK) == cpupid_to_pid(cpupid);
 }

+static inline bool check_cpupid(int cpupid)
+{
+	return cpupid_to_cpu(cpupid) < nr_cpu_ids;
+}
+
 #define cpupid_match_pid(task, cpupid) __cpupid_match_pid(task->pid, cpupid)

 #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
 static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
@@ -1374,12 +1391,25 @@ static inline void page_cpupid_reset_last(struct page *page)
 	page->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
 }
 #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
+
+static inline int xchg_page_access_time(struct page *page, int time)
+{
+	int last_time;
+
+	last_time = page_cpupid_xchg_last(page, time >> PAGE_ACCESS_TIME_BUCKETS);
+	return last_time << PAGE_ACCESS_TIME_BUCKETS;
+}
 #else /* !CONFIG_NUMA_BALANCING */
 static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 {
 	return page_to_nid(page); /* XXX */
 }

+static inline int xchg_page_access_time(struct page *page, int time)
+{
+	return 0;
+}
+
 static inline int page_cpupid_last(struct page *page)
 {
 	return page_to_nid(page); /* XXX */
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index bb3d63bdf4ae..ad63dbfc54f1 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -333,6 +333,7 @@ static __init int sched_init_debug(void)
 	debugfs_create_u32("scan_period_min_ms", 0644, numa, &sysctl_numa_balancing_scan_period_min);
 	debugfs_create_u32("scan_period_max_ms", 0644, numa, &sysctl_numa_balancing_scan_period_max);
 	debugfs_create_u32("scan_size_mb", 0644, numa, &sysctl_numa_balancing_scan_size);
+	debugfs_create_u32("hot_threshold_ms", 0644, numa, &sysctl_numa_balancing_hot_threshold);
 #endif

 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4bd299d67ab..cb130ea46c71 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1058,6 +1058,9 @@ unsigned int sysctl_numa_balancing_scan_size = 256;
 /* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */
 unsigned int sysctl_numa_balancing_scan_delay = 1000;

+/* The page with hint page fault latency < threshold in ms is considered hot */
+unsigned int sysctl_numa_balancing_hot_threshold = 1000;
+
 struct numa_group {
 	refcount_t refcount;
@@ -1400,6 +1403,37 @@ static inline unsigned long group_weight(struct task_struct *p, int nid,
 	return 1000 * faults / total_faults;
 }

+static bool pgdat_free_space_enough(struct pglist_data *pgdat)
+{
+	int z;
+	unsigned long enough_mark;
+
+	enough_mark = max(1UL * 1024 * 1024 * 1024 >> PAGE_SHIFT,
+			  pgdat->node_present_pages >> 4);
+	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+		struct zone *zone = pgdat->node_zones + z;
+
+		if (!populated_zone(zone))
+			continue;
+
+		if (zone_watermark_ok(zone, 0,
+				      high_wmark_pages(zone) + enough_mark,
+				      ZONE_MOVABLE, 0))
+			return true;
+	}
+	return false;
+}
+
+static int numa_hint_fault_latency(struct page *page)
+{
+	int last_time, time;
+
+	time = jiffies_to_msecs(jiffies);
+	last_time = xchg_page_access_time(page, time);
+
+	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1407,9 +1441,38 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	int dst_nid = cpu_to_node(dst_cpu);
 	int last_cpupid, this_cpupid;

+	/*
+	 * Pages in the slow memory node should be migrated according
+	 * to their hot/cold state instead of the accessing CPU node.
+	 */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+	    !node_is_toptier(src_nid)) {
+		struct pglist_data *pgdat;
+		unsigned long latency, th;
+
+		pgdat = NODE_DATA(dst_nid);
+		if (pgdat_free_space_enough(pgdat))
+			return true;
+
+		th = sysctl_numa_balancing_hot_threshold;
+		latency = numa_hint_fault_latency(page);
+		if (latency >= th)
+			return false;
+
+		return true;
+	}
+
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
 	last_cpupid = page_cpupid_xchg_last(page, this_cpupid);

+	/*
+	 * The cpupid may be invalid when NUMA_BALANCING_MEMORY_TIERING
+	 * is disabled dynamically.
+	 */
+	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+	    !node_is_toptier(src_nid) && !check_cpupid(last_cpupid))
+		return false;
+
 	/*
 	 * Allow first faults or private faults to migrate immediately early in
 	 * the lifetime of a task. The magic number 4 is based on waiting for
@@ -2642,6 +2705,17 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	if (!p->mm)
 		return;

+	/*
+	 * NUMA fault statistics are unnecessary for the slow memory node.
+	 *
+	 * And the cpupid may be invalid when NUMA_BALANCING_MEMORY_TIERING
+	 * is disabled dynamically.
+	 */
+	if (!node_is_toptier(mem_node) &&
+	    (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING ||
+	     !check_cpupid(last_cpupid)))
+		return;
+
 	/* Allocate buffer to track faults on a per-node basis */
 	if (unlikely(!p->numa_faults)) {
 		int size = sizeof(*p->numa_faults) *
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 58263f90c559..86ce60d3d472 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2408,6 +2408,7 @@ extern unsigned int sysctl_numa_balancing_scan_delay;
 extern unsigned int sysctl_numa_balancing_scan_period_min;
 extern unsigned int sysctl_numa_balancing_scan_period_max;
 extern unsigned int sysctl_numa_balancing_scan_size;
+extern unsigned int sysctl_numa_balancing_hot_threshold;
 #endif

 #ifdef CONFIG_SCHED_HRTICK
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2fe38212e07c..3a715eeeebb5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1401,7 +1401,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	int page_nid = NUMA_NO_NODE;
-	int target_nid, last_cpupid = -1;
+	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
 	bool migrated = false;
 	bool was_writable = pmd_savedwrite(oldpmd);
 	int flags = 0;
@@ -1422,7 +1422,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;

 	page_nid = page_to_nid(page);
-	last_cpupid = page_cpupid_last(page);
+	if (node_is_toptier(page_nid))
+		last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, haddr, page_nid,
 				       &flags);
@@ -1740,6 +1741,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	if (prot_numa) {
 		struct page *page;
+		bool toptier;
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
 		 * data is likely to be read-cached on the local CPU and
@@ -1752,13 +1754,18 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			goto unlock;

 		page = pmd_page(*pmd);
+		toptier = node_is_toptier(page_to_nid(page));
 		/*
 		 * Skip scanning top tier node if normal numa
 		 * balancing is disabled
 		 */
 		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-		    node_is_toptier(page_to_nid(page)))
+		    toptier)
 			goto unlock;
+
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    !toptier)
+			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
diff --git a/mm/memory.c b/mm/memory.c
index b370ed118767..a8ac15ce7a75 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -74,6 +74,7 @@
 #include
 #include
 #include
+#include <linux/sched/sysctl.h>
 #include

@@ -4455,8 +4456,16 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
 		flags |= TNF_SHARED;

-	last_cpupid = page_cpupid_last(page);
 	page_nid = page_to_nid(page);
+	/*
+	 * In memory tiering mode, the cpupid of a slow memory page is
+	 * used to record the page access time.  So use the default value.
+	 */
+	if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+	    !node_is_toptier(page_nid))
+		last_cpupid = (-1 & LAST_CPUPID_MASK);
+	else
+		last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
 			&flags);
 	if (target_nid == NUMA_NO_NODE) {
diff --git a/mm/migrate.c b/mm/migrate.c
index dc84edfae842..e73f26dfeb38 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -534,6 +534,18 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	 * future migrations of this same page.
 	 */
 	cpupid = page_cpupid_xchg_last(&folio->page, -1);
+	/*
+	 * If migrating between slow and fast memory nodes, reset the
+	 * cpupid, because it is used to record the page access time in
+	 * the slow memory node.
+	 */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) {
+		bool f_toptier = node_is_toptier(page_to_nid(&folio->page));
+		bool t_toptier = node_is_toptier(page_to_nid(&newfolio->page));
+
+		if (f_toptier != t_toptier)
+			cpupid = -1;
+	}
 	page_cpupid_xchg_last(&newfolio->page, cpupid);

 	folio_migrate_ksm(newfolio, folio);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index b69ce7a7b2b7..e7cb90d84fac 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -85,6 +85,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			if (prot_numa) {
 				struct page *page;
 				int nid;
+				bool toptier;

 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
@@ -114,14 +115,19 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				nid = page_to_nid(page);
 				if (target_node == nid)
 					continue;
+				toptier = node_is_toptier(nid);

 				/*
 				 * Skip scanning top tier node if normal numa
 				 * balancing is disabled
 				 */
 				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-				    node_is_toptier(nid))
+				    toptier)
 					continue;
+				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+				    !toptier)
+					xchg_page_access_time(page,
+						jiffies_to_msecs(jiffies));
 			}

 			oldpte = ptep_modify_prot_start(vma, addr, pte);

From patchwork Fri Apr 8 07:12:21 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12806181
From: Huang Ying
To: Peter Zijlstra, Mel Gorman, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Michal Hocko, Rik van Riel, Dave Hansen, Yang Shi, Zi Yan,
    Wei Xu, osalvador, Shakeel Butt
Subject: [PATCH 2/3] memory tiering: rate limit NUMA migration throughput
Date: Fri, 8 Apr 2022 15:12:21 +0800
Message-Id: <20220408071222.219689-3-ying.huang@intel.com>
In-Reply-To: <20220408071222.219689-1-ying.huang@intel.com>
References: <20220408071222.219689-1-ying.huang@intel.com>

In NUMA balancing memory tiering mode, hot pages in the slow memory
node can be promoted to the fast memory node via NUMA balancing.  But
this incurs overhead of its own, so workload performance can sometimes
suffer.  To avoid disturbing the workload too much in these situations,
we should make it possible to rate limit the promotion throughput.

So, in this patch, we implement a simple rate limit algorithm as
follows.  The number of candidate pages to be promoted to the fast
memory node via NUMA balancing is counted; if the count exceeds the
limit specified by the user, NUMA balancing promotion is stopped until
the next second.

A new sysctl knob kernel.numa_balancing_rate_limit_mbps is added for
the user to specify the limit.

TODO: Add ABI document for the new sysctl knob.
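
To illustrate the window-based check, here is a sketch in plain C with
C11 atomics.  The struct fields are simplified stand-ins for the pgdat
counters added by the patch, a millisecond clock replaces jiffies, and
the compare-and-swap on the window timestamp mirrors the patch's
cmpxchg() idiom; this is an illustrative model, not the kernel code.

#include <stdatomic.h>
#include <stdbool.h>

struct mock_pgdat {
	_Atomic unsigned long promote_candidate;  /* PGPROMOTE_CANDIDATE analogue */
	_Atomic unsigned long window_ts;          /* start of current 1 s window (ms) */
	unsigned long window_nr_candidate;        /* candidate count at window start */
};

/*
 * Count @nr candidate pages and return true while the candidates seen
 * in the current one-second window stay under @rate_limit (in pages).
 * The compare-and-swap ensures only one caller resets the window.
 */
static bool check_rate_limit(struct mock_pgdat *pgdat, unsigned long now_ms,
			     unsigned long rate_limit, unsigned long nr)
{
	unsigned long nr_cand, last_ts;

	nr_cand = atomic_fetch_add(&pgdat->promote_candidate, nr) + nr;
	last_ts = atomic_load(&pgdat->window_ts);
	if (now_ms > last_ts + 1000 &&
	    atomic_compare_exchange_strong(&pgdat->window_ts, &last_ts, now_ms))
		pgdat->window_nr_candidate = nr_cand;	/* new window baseline */

	return nr_cand - pgdat->window_nr_candidate < rate_limit;
}

Note that the counter itself keeps running; only the per-window baseline
is reset, so promotions denied in one window are retried naturally in
the next.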

Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h       |  5 +++++
 include/linux/sched/sysctl.h |  1 +
 kernel/sched/fair.c          | 28 ++++++++++++++++++++++++++--
 kernel/sysctl.c              |  8 ++++++++
 mm/vmstat.c                  |  1 +
 5 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 962b14d403e8..e9b4767619bc 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -224,6 +224,7 @@ enum node_stat_item {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	PGPROMOTE_SUCCESS,	/* promote successfully */
+	PGPROMOTE_CANDIDATE,	/* candidate pages to promote */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
@@ -915,6 +916,10 @@ typedef struct pglist_data {
 	struct deferred_split deferred_split_queue;
 #endif
+
+#ifdef CONFIG_NUMA_BALANCING
+	unsigned long numa_ts;
+	unsigned long numa_nr_candidate;
+#endif

 	/* Fields commonly accessed by the page reclaim scanner */

 	/*
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index c1076b5e17fb..b25ce13e646a 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -29,6 +29,7 @@ enum sched_tunable_scaling {

 #ifdef CONFIG_NUMA_BALANCING
 extern int sysctl_numa_balancing_mode;
+extern unsigned int sysctl_numa_balancing_rate_limit;
 #else
 #define sysctl_numa_balancing_mode	0
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cb130ea46c71..a1e7cbfb2647 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1060,6 +1060,11 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000;

 /* The page with hint page fault latency < threshold in ms is considered hot */
 unsigned int sysctl_numa_balancing_hot_threshold = 1000;

+/*
+ * Restrict the NUMA migration throughput (MB/s) for each target node
+ * if there is not enough free space in the target node
+ */
+unsigned int sysctl_numa_balancing_rate_limit = 65536;

 struct numa_group {
 	refcount_t refcount;
@@ -1434,6 +1439,23 @@ static int numa_hint_fault_latency(struct page *page)
 	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
 }

+static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
+					    unsigned long rate_limit, int nr)
+{
+	unsigned long nr_candidate;
+	unsigned long now = jiffies, last_ts;
+
+	mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr);
+	nr_candidate = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+	last_ts = pgdat->numa_ts;
+	if (now > last_ts + HZ &&
+	    cmpxchg(&pgdat->numa_ts, last_ts, now) == last_ts)
+		pgdat->numa_nr_candidate = nr_candidate;
+	if (nr_candidate - pgdat->numa_nr_candidate >= rate_limit)
+		return false;
+	return true;
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1448,7 +1470,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long latency, th;
+		unsigned long rate_limit, latency, th;

 		pgdat = NODE_DATA(dst_nid);
 		if (pgdat_free_space_enough(pgdat))
@@ -1459,7 +1481,9 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 		if (latency >= th)
 			return false;

-		return true;
+		rate_limit = sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+		return numa_migration_check_rate_limit(pgdat, rate_limit,
+						       thp_nr_pages(page));
 	}

 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 830aaf8ca08e..c2de22027e7c 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1698,6 +1698,14 @@ static struct ctl_table kern_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= SYSCTL_FOUR,
 	},
+	{
+		.procname	= "numa_balancing_rate_limit_mbps",
+		.data		= &sysctl_numa_balancing_rate_limit,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
 		.procname	= "sched_rt_period_us",
diff --git a/mm/vmstat.c b/mm/vmstat.c
index b75b1a64b54c..bbaf732df925 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1245,6 +1245,7 @@ const char * const vmstat_text[] = {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	"pgpromote_success",
+	"pgpromote_candidate",
 #endif

 	/* enum writeback_stat_item counters */

From patchwork Fri Apr 8 07:12:22 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12806182
From: Huang Ying
To: Peter Zijlstra, Mel Gorman, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Michal Hocko, Rik van Riel, Dave Hansen, Yang Shi, Zi Yan,
    Wei Xu, osalvador, Shakeel Butt
Subject: [PATCH 3/3] memory tiering: adjust hot threshold automatically
Date: Fri, 8 Apr 2022 15:12:22 +0800
Message-Id: <20220408071222.219689-4-ying.huang@intel.com>
In-Reply-To: <20220408071222.219689-1-ying.huang@intel.com>
References: <20220408071222.219689-1-ying.huang@intel.com>

The promotion hot threshold may be workload dependent.  So in this
patch, a method to adjust the hot threshold automatically is
implemented.  The basic idea is to control the number of candidate
promotion pages to match the promotion rate limit.

If the hint page fault latency of a page is less than the hot
threshold, we will try to promote the page; such a page is called a
candidate promotion page.  If the number of candidate promotion pages
in a statistics interval is much larger than the promotion rate limit,
the hot threshold is decreased to reduce the number of candidate
promotion pages.  Otherwise, the hot threshold is increased to increase
the number of candidate promotion pages.

To make the above method work, in each statistics interval, the total
number of pages to check (on which hint page faults occur) and the
hot/cold distribution need to be stable.  Because page tables are
scanned linearly in NUMA balancing, while the hot/cold distribution
usually isn't uniform across the address space, the statistics interval
should be larger than the NUMA balancing scan period.  So in this
patch, the maximum scan period is used as the statistics interval; this
works well in our tests.
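
The adjustment itself is a simple proportional control step, sketched
below in plain C.  The step count (16) and the 110%/90% dead band come
from the patch; the function signature and names are simplified
stand-ins, a model rather than the kernel helper.

#define ADJUST_STEPS	16

/*
 * One control step per statistics interval: if more candidates than
 * the rate limit allows (@ref_cand) were seen, lower the threshold by
 * 1/16 of the default; if notably fewer, raise it again, capped at the
 * default @ref_th.  th == 0 means "not initialized yet".
 */
static unsigned long adjust_threshold(unsigned long th, unsigned long ref_th,
				      unsigned long diff_cand,
				      unsigned long ref_cand)
{
	unsigned long unit_th = ref_th / ADJUST_STEPS;

	if (th == 0)
		th = ref_th;
	if (diff_cand > ref_cand * 11 / 10)		/* >110%: too hot, tighten */
		th = (th > 2 * unit_th) ? th - unit_th : unit_th;
	else if (diff_cand < ref_cand * 9 / 10)		/* <90%: too cold, relax */
		th = (th + unit_th < ref_th) ? th + unit_th : ref_th;
	return th;
}

The dead band between 90% and 110% of the target keeps the threshold
from oscillating when the candidate count is already close to the rate
limit.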

Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h       |  3 ++
 include/linux/sched/sysctl.h |  2 ++
 kernel/sched/core.c          | 15 +++++++++
 kernel/sched/fair.c          | 62 +++++++++++++++++++++++++++++++++---
 4 files changed, 78 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index e9b4767619bc..4abf1af20372 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -919,6 +919,9 @@ typedef struct pglist_data {
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned long numa_ts;
 	unsigned long numa_nr_candidate;
+	unsigned long numa_threshold_ts;
+	unsigned long numa_threshold_nr_candidate;
+	unsigned long numa_threshold;
 #endif

 	/* Fields commonly accessed by the page reclaim scanner */
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index b25ce13e646a..58c1937ba09f 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -67,6 +67,8 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos);
 int sysctl_numa_balancing(struct ctl_table *table, int write, void *buffer,
 		size_t *lenp, loff_t *ppos);
+int sysctl_numa_balancing_threshold(struct ctl_table *table, int write, void *buffer,
+		size_t *lenp, loff_t *ppos);
 int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
 		size_t *lenp, loff_t *ppos);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d575b4914925..179cc4c16e24 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4364,6 +4364,18 @@ void set_numabalancing_state(bool enabled)
 }

 #ifdef CONFIG_PROC_SYSCTL
+static void reset_memory_tiering(void)
+{
+	struct pglist_data *pgdat;
+
+	for_each_online_pgdat(pgdat) {
+		pgdat->numa_threshold = 0;
+		pgdat->numa_threshold_nr_candidate =
+			node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		pgdat->numa_threshold_ts = jiffies;
+	}
+}
+
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -4380,6 +4392,9 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	if (err < 0)
 		return err;
 	if (write) {
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+		    (state & NUMA_BALANCING_MEMORY_TIERING))
+			reset_memory_tiering();
 		sysctl_numa_balancing_mode = state;
 		__set_numabalancing_state(state);
 	}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a1e7cbfb2647..a01216d95bd7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1456,6 +1456,54 @@ static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
 	return true;
 }

+int sysctl_numa_balancing_threshold(struct ctl_table *table, int write, void *buffer,
+		size_t *lenp, loff_t *ppos)
+{
+	int err;
+	struct pglist_data *pgdat;
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+	if (err < 0 || !write)
+		return err;
+
+	for_each_online_pgdat(pgdat)
+		pgdat->numa_threshold = 0;
+
+	return err;
+}
+
+#define NUMA_MIGRATION_ADJUST_STEPS	16
+
+static void numa_migration_adjust_threshold(struct pglist_data *pgdat,
+					    unsigned long rate_limit,
+					    unsigned long ref_th)
+{
+	unsigned long now = jiffies, last_th_ts, th_period;
+	unsigned long unit_th, th;
+	unsigned long nr_cand, ref_cand, diff_cand;
+
+	th_period = msecs_to_jiffies(sysctl_numa_balancing_scan_period_max);
+	last_th_ts = pgdat->numa_threshold_ts;
+	if (now > last_th_ts + th_period &&
+	    cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) {
+		ref_cand = rate_limit *
+			sysctl_numa_balancing_scan_period_max / 1000;
+		nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate;
+		unit_th = ref_th / NUMA_MIGRATION_ADJUST_STEPS;
+		th = pgdat->numa_threshold ? : ref_th;
+		if (diff_cand > ref_cand * 11 / 10)
+			th = max(th - unit_th, unit_th);
+		else if (diff_cand < ref_cand * 9 / 10)
+			th = min(th + unit_th, ref_th);
+		pgdat->numa_threshold_nr_candidate = nr_cand;
+		pgdat->numa_threshold = th;
+	}
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1470,18 +1518,24 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long rate_limit, latency, th;
+		unsigned long rate_limit, latency, th, def_th;

 		pgdat = NODE_DATA(dst_nid);
-		if (pgdat_free_space_enough(pgdat))
+		if (pgdat_free_space_enough(pgdat)) {
+			/* workload changed, reset hot threshold */
+			pgdat->numa_threshold = 0;
 			return true;
+		}
+
+		def_th = sysctl_numa_balancing_hot_threshold;
+		rate_limit = sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+		numa_migration_adjust_threshold(pgdat, rate_limit, def_th);

-		th = sysctl_numa_balancing_hot_threshold;
+		th = pgdat->numa_threshold ? : def_th;
 		latency = numa_hint_fault_latency(page);
 		if (latency >= th)
 			return false;

-		rate_limit = sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
 		return numa_migration_check_rate_limit(pgdat, rate_limit,
 						       thp_nr_pages(page));
 	}