From patchwork Tue Aug 25 00:23:54 2020
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 11734503
From: Huang Ying <ying.huang@intel.com>
To: Peter Zijlstra
Cc: linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Huang Ying, Andrew Morton, Michal Hocko,
    Rik van Riel, Mel Gorman, Ingo Molnar, Dave Hansen, Dan Williams
Subject: [RFC -V3 5/5] autonuma, memory tiering: Adjust hot threshold automatically
Date: Tue, 25 Aug 2020 08:23:54 +0800
Message-Id: <20200825002354.17038-6-ying.huang@intel.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200825002354.17038-1-ying.huang@intel.com>
References: <20200825002354.17038-1-ying.huang@intel.com>

It isn't easy for the administrator to determine the hot threshold, so
this patch implements a method to adjust the hot threshold
automatically.  The basic idea is to control the number of candidate
promotion pages so that it matches the promotion rate limit.  If the
hint page fault latency of a page is less than the hot threshold, we
try to promote the page, and the page is called a candidate promotion
page.  If the number of candidate promotion pages in a statistics
interval is much higher than the promotion rate limit, the hot
threshold is decreased to reduce the number of candidate promotion
pages; otherwise, the hot threshold is increased.  A simplified sketch
of this control loop is included at the end of this description.

For the above method to work, the total number of pages checked (that
is, the pages on which hint page faults occur) and the hot/cold
distribution need to be stable within each statistics interval.
Because the page tables are scanned linearly in AutoNUMA, while the
hot/cold distribution isn't uniform across the address space, the
statistics interval should be larger than the AutoNUMA scan period.
So in this patch, the max scan period is used as the statistics
interval, and it works well in our tests.

The sysctl knob kernel.numa_balancing_hot_threshold_ms becomes the
initial value and the max value of the hot threshold.

The patch improves the score of the pmbench memory accessing benchmark
with an 80:20 read/write ratio and a normal access address distribution
by 3%, with 30% fewer NUMA page migrations, on a 2-socket Intel server
with Optane DC Persistent Memory, because it improves the accuracy of
hot page selection.
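To illustrate the control loop in isolation, below is a minimal
userspace sketch (not part of the patch) of one adjustment step per
statistics interval.  The 10% dead band and the 16-step granularity
mirror the diff below; the initial threshold, the target candidate
count, and the per-interval sample counts are made-up values chosen
only for illustration.

	/* threshold_sim.c: standalone simulation of the adjustment step. */
	#include <stdio.h>

	#define ADJUST_STEPS	16	/* mirrors NUMA_MIGRATION_ADJUST_STEPS */

	/* One statistics interval: move the threshold one step against the error. */
	static unsigned long adjust_threshold(unsigned long th, unsigned long ref_th,
					      unsigned long nr_cand,
					      unsigned long ref_cand)
	{
		unsigned long unit_th = ref_th / ADJUST_STEPS;

		if (nr_cand > ref_cand * 11 / 10)	/* >10% over target: cool down */
			th = th > 2 * unit_th ? th - unit_th : unit_th;
		else if (nr_cand < ref_cand * 9 / 10)	/* >10% under target: warm up */
			th = th + unit_th < ref_th ? th + unit_th : ref_th;

		return th;
	}

	int main(void)
	{
		unsigned long th = 1000;	/* initial/max threshold, in ms */
		unsigned long ref_cand = 100;	/* target candidates per interval */
		/* Made-up per-interval candidate counts. */
		unsigned long samples[] = { 150, 140, 120, 95, 70, 80, 105 };
		size_t i;

		for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
			th = adjust_threshold(th, 1000, samples[i], ref_cand);
			printf("interval %zu: nr_cand=%lu -> threshold=%lu ms\n",
			       i, samples[i], th);
		}
		return 0;
	}

Note how the threshold ratchets down while the candidate count exceeds
the target, stays put inside the dead band, and climbs back (capped at
the initial value) when candidates fall short.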
Signed-off-by: "Huang, Ying" Cc: Andrew Morton Cc: Michal Hocko Cc: Rik van Riel Cc: Mel Gorman Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Dave Hansen Cc: Dan Williams Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org --- include/linux/mmzone.h | 3 +++ kernel/sched/fair.c | 40 ++++++++++++++++++++++++++++++++++++---- 2 files changed, 39 insertions(+), 4 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 6e1e138cf61c..f7a7f0c374d5 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -752,6 +752,9 @@ typedef struct pglist_data { #ifdef CONFIG_NUMA_BALANCING unsigned long numa_ts; unsigned long numa_nr_candidate; + unsigned long numa_threshold_ts; + unsigned long numa_threshold_nr_candidate; + unsigned long numa_threshold; #endif /* Fields commonly accessed by the page reclaim scanner */ diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 7835485e4b8a..110e3c847a29 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -1472,6 +1472,35 @@ static bool numa_migration_check_rate_limit(struct pglist_data *pgdat, return true; } +#define NUMA_MIGRATION_ADJUST_STEPS 16 + +static void numa_migration_adjust_threshold(struct pglist_data *pgdat, + unsigned long rate_limit, + unsigned long ref_th) +{ + unsigned long now = jiffies, last_th_ts, th_period; + unsigned long unit_th, th; + unsigned long nr_cand, ref_cand, diff_cand; + + th_period = msecs_to_jiffies(sysctl_numa_balancing_scan_period_max); + last_th_ts = pgdat->numa_threshold_ts; + if (now > last_th_ts + th_period && + cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) { + ref_cand = rate_limit * + sysctl_numa_balancing_scan_period_max / 1000; + nr_cand = node_page_state(pgdat, NUMA_NR_CANDIDATE); + diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate; + unit_th = ref_th / NUMA_MIGRATION_ADJUST_STEPS; + th = pgdat->numa_threshold ? : ref_th; + if (diff_cand > ref_cand * 11 / 10) + th = max(th - unit_th, unit_th); + else if (diff_cand < ref_cand * 9 / 10) + th = min(th + unit_th, ref_th); + pgdat->numa_threshold_nr_candidate = nr_cand; + pgdat->numa_threshold = th; + } +} + bool should_numa_migrate_memory(struct task_struct *p, struct page * page, int src_nid, int dst_cpu) { @@ -1486,19 +1515,22 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page, if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING && !node_is_toptier(src_nid)) { struct pglist_data *pgdat; - unsigned long rate_limit, latency, th; + unsigned long rate_limit, latency, th, def_th; pgdat = NODE_DATA(dst_nid); if (pgdat_free_space_enough(pgdat)) return true; - th = sysctl_numa_balancing_hot_threshold; + def_th = sysctl_numa_balancing_hot_threshold; + rate_limit = + sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT); + numa_migration_adjust_threshold(pgdat, rate_limit, def_th); + + th = pgdat->numa_threshold ? : def_th; latency = numa_hint_fault_latency(page); if (latency > th) return false; - rate_limit = - sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT); return numa_migration_check_rate_limit(pgdat, rate_limit, hpage_nr_pages(page)); }