From patchwork Tue May 28 18:54:58 2024
X-Patchwork-Submitter: Eric Chanudet
X-Patchwork-Id: 13677090
From: Eric Chanudet
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, "H. Peter Anvin", Mike Rapoport,
	Andrew Morton, Baoquan He, Michael Ellerman, Nick Piggin
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, Eric Chanudet
Subject: [PATCH v3] mm/mm_init: use node's number of cpus in
	deferred_page_init_max_threads
Date: Tue, 28 May 2024 14:54:58 -0400
Message-ID: <20240528185455.643227-4-echanude@redhat.com>
X-Mailer: git-send-email 2.44.0

When DEFERRED_STRUCT_PAGE_INIT=y, use the node's CPU count as the
maximum thread count for the deferred initialization of struct pages
via padata. This should shorten boot time for such configurations by
getting through page_alloc_init_late() faster, as systems tend not to
be under heavy load that early in the bootstrap.

Only x86_64 does this today. Make the behaviour arch-agnostic whenever
DEFERRED_STRUCT_PAGE_INIT is set; with the default defconfigs, that
includes powerpc and s390. This was the common behaviour for all
architectures before commit ecd096506922 ("mm: make deferred init's max
threads arch-specific") let architectures override the function for
tuning.

Setting DEFERRED_STRUCT_PAGE_INIT and testing on a few arm64 platforms
shows faster deferred_init_memmap completions:

|         | x13s        | SA8775p-ride | Ampere R137-P31 | Ampere HR330 |
|         | Metal, 32GB | VM, 36GB     | VM, 58GB        | Metal, 128GB |
|         | 8 cpus      | 8 cpus       | 8 cpus          | 32 cpus      |
|---------|-------------|--------------|-----------------|--------------|
| threads | ms (%)      | ms (%)       | ms (%)          | ms (%)       |
|---------|-------------|--------------|-----------------|--------------|
| 1       | 108 (0%)    | 72 (0%)      | 224 (0%)        | 324 (0%)     |
| cpus    | 24 (-77%)   | 36 (-50%)    | 40 (-82%)       | 56 (-82%)    |

On a powerpc machine (1TB, 40 cores, 4KB pages), Michael Ellerman
reports deferred_init_memmap times dropping from 210-240ms to 90-110ms
across nodes.
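For context, the limit returned by deferred_page_init_max_threads()
caps the number of padata worker threads spawned per node. A rough
sketch of how deferred_init_memmap() consumes it (paraphrased from
mm/mm_init.c; the exact fields and surrounding loop differ between
kernel versions):

	/* One job per node; padata splits the PFN range across threads. */
	const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
	unsigned int max_threads = deferred_page_init_max_threads(cpumask);

	struct padata_mt_job job = {
		.thread_fn   = deferred_init_memmap_chunk,
		.fn_arg      = zone,
		.start       = spfn,
		.size        = epfn - spfn,
		.align       = PAGES_PER_SECTION,
		.min_chunk   = PAGES_PER_SECTION,
		.max_threads = max_threads,
	};
	padata_do_multithreaded(&job);

With this patch the limit simply becomes the number of CPUs in the
node's cpumask on every architecture, rather than an arch-chosen value.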
Signed-off-by: Eric Chanudet
Tested-by: Michael Ellerman (powerpc)
Acked-by: Mike Rapoport (IBM)
Acked-by: Alexander Gordeev
Reviewed-by: Baoquan He
---
- v1: https://lore.kernel.org/linux-arm-kernel/20240520231555.395979-5-echanude@redhat.com
- Changes since v1:
  - Make the generic function return the node's number of CPUs as the
    max threads limit instead of overriding it for arm64.
  - Drop Baoquan He's R-b on v1 since the logic changed.
  - Add CCs according to patch changes (ppc and s390 set
    DEFERRED_STRUCT_PAGE_INIT by default).
- v2: https://lore.kernel.org/linux-arm-kernel/20240522203758.626932-4-echanude@redhat.com/
- Changes since v2:
  - deferred_page_init_max_threads returns unsigned int and uses max
    instead of max_t.
  - Make deferred_page_init_max_threads static since there are no more
    overrides.
  - Rephrase description.
  - Add T-b and report from Michael Ellerman.

 arch/x86/mm/init_64.c    | 12 ------------
 include/linux/memblock.h |  2 --
 mm/mm_init.c             |  5 ++---
 3 files changed, 2 insertions(+), 17 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 7e177856ee4f..adec42928ec1 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1354,18 +1354,6 @@ void __init mem_init(void)
 	preallocate_vmalloc_pages();
 }
 
-#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
-int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask)
-{
-	/*
-	 * More CPUs always led to greater speedups on tested systems, up to
-	 * all the nodes' CPUs. Use all since the system is otherwise idle
-	 * now.
-	 */
-	return max_t(int, cpumask_weight(node_cpumask), 1);
-}
-#endif
-
 int kernel_set_to_readonly;
 
 void mark_rodata_ro(void)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index e2082240586d..40c62aca36ec 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -335,8 +335,6 @@ void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
 	for (; i != U64_MAX;					  \
 	     __next_mem_pfn_range_in_zone(&i, zone, p_start, p_end))
 
-int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask);
-
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 
 /**
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f72b852bd5b8..acfeba508796 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2122,11 +2122,10 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
 	}
 }
 
-/* An arch may override for more concurrency. */
-__weak int __init
+static unsigned int __init
 deferred_page_init_max_threads(const struct cpumask *node_cpumask)
 {
-	return 1;
+	return max(cpumask_weight(node_cpumask), 1U);
 }
 
 /* Initialise remaining memory on a node */
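A note on the new generic helper: cpumask_weight() returns the number
of CPUs set in the node's cpumask, and the max(..., 1U) keeps at least
one thread for nodes whose cpumask is empty, matching the clamp the
removed x86 version did with max_t(int, ..., 1). An illustrative (not
literal) fragment of how the result behaves, with nid as a hypothetical
node id:

	/* Illustration only: a memory-only node with no local CPUs. */
	const struct cpumask *mask = cpumask_of_node(nid); /* empty mask */
	unsigned int nr_cpus = cpumask_weight(mask);       /* 0 */
	unsigned int nr_threads = max(nr_cpus, 1U);        /* still 1 thread */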