From patchwork Wed Sep 20 06:18:56 2023
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 13392117
From: Huang Ying <ying.huang@intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Arjan Van De Ven, Huang Ying,
    Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand,
    Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
    Matthew Wilcox, Christoph Lameter
Subject: [PATCH 10/10] mm, pcp: reduce detecting time of consecutive high order page freeing
Date: Wed, 20 Sep 2023 14:18:56 +0800
Message-Id: <20230920061856.257597-11-ying.huang@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230920061856.257597-1-ying.huang@intel.com>
References: <20230920061856.257597-1-ying.huang@intel.com>
MIME-Version: 1.0

In the current PCP auto-tuning design, if the number of pages
allocated on a CPU is much larger than the number of pages freed, the
PCP high may grow to its maximal value even if the allocating/freeing
depth is small, for example, on the sender side of network workloads.
If a CPU that was originally used as a sender is then used as a
receiver after a context switch, the whole PCP must be filled to the
maximal high before PCP draining is triggered for consecutive
high-order freeing.
This hurts the performance of some network workloads.

To solve the issue, this patch tracks consecutive page freeing with a
counter instead of relying on PCP draining, so consecutive page
freeing can be detected much earlier.

On a 2-socket Intel server with 128 logical CPUs, we tested the
SCTP_STREAM_MANY test case of the netperf test suite with 64 pairs of
processes.  With the patch, the network bandwidth improves by 3.1%.
This restores the performance drop caused by PCP auto-tuning.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
(A standalone sketch of the free_count heuristic follows the diff
below.)

 include/linux/mmzone.h |  2 +-
 mm/page_alloc.c        | 23 +++++++++++------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 35b78c7522a7..44f6dc3cdeeb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -689,10 +689,10 @@ struct per_cpu_pages {
 	int batch;		/* chunk size for buddy add/remove */
 	u8 flags;		/* protected by pcp->lock */
 	u8 alloc_factor;	/* batch scaling factor during allocate */
-	u8 free_factor;		/* batch scaling factor during free */
 #ifdef CONFIG_NUMA
 	u8 expire;		/* When 0, remote pagesets are drained */
 #endif
+	short free_count;	/* consecutive free count */
 
 	/* Lists of pages, one per migrate type stored on the pcp-lists */
 	struct list_head lists[NR_PCP_LISTS];
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 77e9b7b51688..6ae2a5ebf7a4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2375,13 +2375,10 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int batch, int high, bool free
 	max_nr_free = high - batch;
 
 	/*
-	 * Double the number of pages freed each time there is subsequent
-	 * freeing of pages without any allocation.
+	 * Increase the batch number to the number of the consecutive
+	 * freed pages to reduce zone lock contention.
 	 */
-	batch <<= pcp->free_factor;
-	if (batch <= max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX)
-		pcp->free_factor++;
-	batch = clamp(batch, min_nr_free, max_nr_free);
+	batch = clamp_t(int, pcp->free_count, min_nr_free, max_nr_free);
 
 	return batch;
 }
@@ -2408,7 +2405,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 	 * stored on pcp lists
 	 */
 	if (test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags)) {
-		pcp->high = max(high - (batch << pcp->free_factor), high_min);
+		pcp->high = max(high - pcp->free_count, high_min);
 		return min(batch << 2, pcp->high);
 	}
 
@@ -2416,10 +2413,10 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 		return high;
 
 	if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
-		pcp->high = max(high - (batch << pcp->free_factor), high_min);
+		pcp->high = max(high - pcp->free_count, high_min);
 		high = max(pcp->count, high_min);
 	} else if (pcp->count >= high) {
-		int need_high = (batch << pcp->free_factor) + batch;
+		int need_high = pcp->free_count + batch;
 
 		/* pcp->high should be large enough to hold batch freed pages */
 		if (pcp->high < need_high)
@@ -2456,7 +2453,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	 * stops will be drained from vmstat refresh context.
 	 */
 	if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
-		free_high = (pcp->free_factor &&
+		free_high = (pcp->free_count >= batch &&
 			     (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
 			     (!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
 			      pcp->count >= READ_ONCE(batch)));
@@ -2464,6 +2461,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	} else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
 		pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
 	}
+	if (pcp->free_count < (batch << PCP_BATCH_SCALE_MAX))
+		pcp->free_count += (1 << order);
 	high = nr_pcp_high(pcp, zone, batch, free_high);
 	if (pcp->count >= high) {
 		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
@@ -2861,7 +2860,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	 * See nr_pcp_free() where free_factor is increased for subsequent
 	 * frees.
 	 */
-	pcp->free_factor >>= 1;
+	pcp->free_count >>= 1;
 	list = &pcp->lists[order_to_pindex(migratetype, order)];
 	page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
 	pcp_spin_unlock(pcp);
@@ -5483,7 +5482,7 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonesta
 	pcp->high_min = BOOT_PAGESET_HIGH;
 	pcp->high_max = BOOT_PAGESET_HIGH;
 	pcp->batch = BOOT_PAGESET_BATCH;
-	pcp->free_factor = 0;
+	pcp->free_count = 0;
 }
 
 static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high_min,
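
For review convenience, below is a minimal standalone userspace model
(not kernel code) of the behavior change: with the old free_factor
scheme, once pcp->high had grown to its maximum, consecutive
high-order freeing was only noticed after the PCP list filled up to
pcp->high, while the new free_count counter lets free_high trigger
after roughly one batch of freed pages.  The batch/high/order values
are made-up examples, and PCP_BATCH_SCALE_MAX is assumed to be 5 (the
kernel's default).

/*
 * Minimal model of PCP free_high detection latency.  Not kernel code;
 * the values below are illustrative only.
 */
#include <stdio.h>

#define PCP_BATCH_SCALE_MAX	5	/* assumed kernel default */

/*
 * Old scheme: with pcp->high already at its maximum, consecutive
 * high-order freeing was only detected when the PCP list filled up to
 * "high" pages and draining started.
 */
static int frees_until_detect_old(int high, int order)
{
	int count = 0, frees = 0;

	while (count < high) {
		count += 1 << order;
		frees++;
	}
	return frees;
}

/*
 * New scheme: free_count is bumped by 1 << order on each free (capped
 * at batch << PCP_BATCH_SCALE_MAX); free_high can trigger as soon as
 * free_count reaches one batch.
 */
static int frees_until_detect_new(int batch, int order)
{
	int free_count = 0, frees = 0;

	while (free_count < batch) {
		if (free_count < (batch << PCP_BATCH_SCALE_MAX))
			free_count += 1 << order;
		frees++;
	}
	return frees;
}

int main(void)
{
	int batch = 63;		/* example pcp->batch */
	int high = 6400;	/* example maximal pcp->high */
	int order = 3;		/* high-order, <= PAGE_ALLOC_COSTLY_ORDER */

	printf("old: detected after %d consecutive frees\n",
	       frees_until_detect_old(high, order));
	printf("new: detected after %d consecutive frees\n",
	       frees_until_detect_new(batch, order));
	return 0;
}

With these example numbers the model prints 800 frees for the old
scheme versus 8 for the new one, which is the "much earlier" detection
the commit message refers to.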