From patchwork Tue Sep 26 06:09:10 2023
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 13398718
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
    Huang Ying, Mel Gorman, Vlastimil Babka, David Hildenbrand,
    Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
    Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V2 09/10] mm, pcp: avoid to reduce PCP high unnecessarily
Date: Tue, 26 Sep 2023 14:09:10 +0800
Message-Id: <20230926060911.266511-10-ying.huang@intel.com>
In-Reply-To: <20230926060911.266511-1-ying.huang@intel.com>
References: <20230926060911.266511-1-ying.huang@intel.com>
MIME-Version: 1.0
In the PCP high auto-tuning algorithm, the periodic vmstat updating kworker
(via refresh_cpu_vm_stats()) decreases PCP high to try to free possible idle
PCP pages and so minimize the number of idle pages held in the PCP.  One
issue is that PCP high may be reduced unnecessarily even if the page
allocation/freeing depth is larger than the maximal PCP high.

To avoid the above issue, in this patch, track the minimal PCP page count,
and cap the periodic PCP high decrement at the recently observed minimal
PCP page count.  Thus, only pages detected to be idle are freed.

On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild instances
in parallel (each with `make -j 28`) in 8 cgroups.  This simulates the
kbuild server that is used by the 0-Day kbuild service.  With the patch,
the number of pages allocated from the zone (instead of from the PCP)
decreases by 21.4%.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 15 ++++++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8a19e2af89df..35b78c7522a7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -682,6 +682,7 @@ enum zone_watermarks {
 struct per_cpu_pages {
 	spinlock_t lock;	/* Protects lists field */
 	int count;		/* number of pages in the list */
+	int count_min;		/* minimal number of pages in the list recently */
 	int high;		/* high watermark, emptying needed */
 	int high_min;		/* min high watermark */
 	int high_max;		/* max high watermark */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 08b74c65b88a..d7b602822ab3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2166,19 +2166,20 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
  */
 int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 {
-	int high_min, to_drain, batch;
+	int high_min, decrease, to_drain, batch;
 	int todo = 0;
 
 	high_min = READ_ONCE(pcp->high_min);
 	batch = READ_ONCE(pcp->batch);
 	/*
-	 * Decrease pcp->high periodically to try to free possible
-	 * idle PCP pages.  And, avoid to free too many pages to
-	 * control latency.
+	 * Decrease pcp->high periodically to free idle PCP pages counted
+	 * via pcp->count_min.  And, avoid to free too many pages to
+	 * control latency.  This caps pcp->high decrement too.
 	 */
 	if (pcp->high > high_min) {
+		decrease = min(pcp->count_min, pcp->high / 5);
 		pcp->high = max3(pcp->count - (batch << PCP_BATCH_SCALE_MAX),
-				 pcp->high * 4 / 5, high_min);
+				 pcp->high - decrease, high_min);
 		if (pcp->high > high_min)
 			todo++;
 	}
@@ -2191,6 +2192,8 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 			todo++;
 	}
 
+	pcp->count_min = pcp->count;
+
 	return todo;
 }
 
@@ -2828,6 +2831,8 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 		page = list_first_entry(list, struct page, pcp_list);
 		list_del(&page->pcp_list);
 		pcp->count -= 1 << order;
+		if (pcp->count < pcp->count_min)
+			pcp->count_min = pcp->count;
 	} while (check_new_pages(page, order));
 
 	return page;