From patchwork Sat Dec 18 21:20:10 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12686299
From: Yury Norov
To: linux-kernel@vger.kernel.org, Yury Norov, "James E.J. Bottomley",
 "Martin K. Petersen", Michał Mirosław, "Paul E. McKenney",
 "Rafael J. Wysocki", Alexander Shishkin, Alexey Klimov, Amitkumar Karwar,
 Andi Kleen, Andrew Lunn, Andrew Morton, Andy Gross, Andy Lutomirski,
 Andy Shevchenko, Anup Patel, Ard Biesheuvel, Arnaldo Carvalho de Melo,
 Arnd Bergmann, Borislav Petkov, Catalin Marinas, Christoph Hellwig,
 Christoph Lameter, Daniel Vetter, Dave Hansen, David Airlie, David Laight,
 Dennis Zhou, Emil Renner Berthing, Geert Uytterhoeven, Geetha sowjanya,
 Greg Kroah-Hartman, Guo Ren, Hans de Goede, Heiko Carstens, Ian Rogers,
 Ingo Molnar, Jakub Kicinski, Jason Wessel, Jens Axboe, Jiri Olsa,
 Joe Perches, Jonathan Cameron, Juri Lelli, Kees Cook, Krzysztof Kozlowski,
 Lee Jones, Marc Zyngier, Marcin Wojtas, Mark Gross, Mark Rutland,
 Matti Vaittinen, Mauro Carvalho Chehab, Mel Gorman, Michael Ellerman,
 Mike Marciniszyn, Nicholas Piggin, Palmer Dabbelt, Peter Zijlstra,
 Petr Mladek, Randy Dunlap, Rasmus Villemoes, Russell King, Saeed Mahameed,
 Sagi Grimberg, Sergey Senozhatsky, Solomon Peachy, Stephen Boyd,
 Stephen Rothwell, Steven Rostedt, Subbaraya Sundeep, Sudeep Holla,
 Sunil Goutham, Tariq Toukan, Tejun Heo, Thomas Bogendoerfer,
 Thomas Gleixner, Ulf Hansson, Vincent Guittot, Vineet Gupta, Viresh Kumar,
 Vivien Didelot, Vlastimil Babka, Will Deacon,
 bcm-kernel-feedback-list@broadcom.com, kvm@vger.kernel.org,
 linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-crypto@vger.kernel.org, linux-csky@vger.kernel.org,
 linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org,
 linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-snps-arc@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 14/17] kernel/cpu: add num_present_cpu counter
Date: Sat, 18 Dec 2021 13:20:10 -0800
Message-Id: <20211218212014.1315894-15-yury.norov@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211218212014.1315894-1-yury.norov@gmail.com>
References:
 <20211218212014.1315894-1-yury.norov@gmail.com>
Precedence: bulk
List-ID: <linux-mips.vger.kernel.org>
X-Mailing-List: linux-mips@vger.kernel.org

Similarly to the online CPUs, the cpu_present_mask is actively used in the
kernel. This patch adds a counter of present CPUs, so that callers of
num_present_cpus() get the result immediately, instead of recomputing
bitmap_weight() for the mask on every call.

Suggested-by: Nicholas Piggin
Signed-off-by: Yury Norov
---
 include/linux/cpumask.h | 25 +++++++++++++++----------
 kernel/cpu.c            | 16 ++++++++++++++++
 2 files changed, 31 insertions(+), 10 deletions(-)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 0be2504d8e4c..c2a9d15e2cbd 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -100,6 +100,7 @@ extern struct cpumask __cpu_dying_mask;
 
 extern atomic_t __num_online_cpus;
 extern atomic_t __num_possible_cpus;
+extern atomic_t __num_present_cpus;
 
 extern cpumask_t cpus_booted_once_mask;
 
@@ -873,15 +874,7 @@ void init_cpu_online(const struct cpumask *src);
 
 void set_cpu_possible(unsigned int cpu, bool possible);
 void reset_cpu_possible_mask(void);
-
-static inline void
-set_cpu_present(unsigned int cpu, bool present)
-{
-	if (present)
-		cpumask_set_cpu(cpu, &__cpu_present_mask);
-	else
-		cpumask_clear_cpu(cpu, &__cpu_present_mask);
-}
+void set_cpu_present(unsigned int cpu, bool present);
 
 void set_cpu_online(unsigned int cpu, bool online);
 
@@ -965,7 +958,19 @@ static inline unsigned int num_possible_cpus(void)
 {
 	return atomic_read(&__num_possible_cpus);
 }
-#define num_present_cpus()	cpumask_weight(cpu_present_mask)
+
+/**
+ * num_present_cpus() - Read the number of present CPUs
+ *
+ * Despite the fact that __num_present_cpus is of type atomic_t, this
+ * interface gives only a momentary snapshot and is not protected against
+ * concurrent CPU hotplug operations unless invoked from a cpuhp_lock held
+ * region.
+ */
+static inline unsigned int num_present_cpus(void)
+{
+	return atomic_read(&__num_present_cpus);
+}
 #define num_active_cpus()	cpumask_weight(cpu_active_mask)
 
 static inline bool cpu_online(unsigned int cpu)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index a0a815911173..1f7ea1bdde1a 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -2597,6 +2597,9 @@ EXPORT_SYMBOL(__cpu_online_mask);
 struct cpumask __cpu_present_mask __read_mostly;
 EXPORT_SYMBOL(__cpu_present_mask);
 
+atomic_t __num_present_cpus __read_mostly;
+EXPORT_SYMBOL(__num_present_cpus);
+
 struct cpumask __cpu_active_mask __read_mostly;
 EXPORT_SYMBOL(__cpu_active_mask);
 
@@ -2609,6 +2612,7 @@ EXPORT_SYMBOL(__num_online_cpus);
 void init_cpu_present(const struct cpumask *src)
 {
 	cpumask_copy(&__cpu_present_mask, src);
+	atomic_set(&__num_present_cpus, cpumask_weight(cpu_present_mask));
 }
 
 void init_cpu_possible(const struct cpumask *src)
@@ -2662,6 +2666,18 @@ void set_cpu_possible(unsigned int cpu, bool possible)
 }
 EXPORT_SYMBOL(set_cpu_possible);
 
+void set_cpu_present(unsigned int cpu, bool present)
+{
+	if (present) {
+		if (!cpumask_test_and_set_cpu(cpu, &__cpu_present_mask))
+			atomic_inc(&__num_present_cpus);
+	} else {
+		if (cpumask_test_and_clear_cpu(cpu, &__cpu_present_mask))
+			atomic_dec(&__num_present_cpus);
+	}
+}
+EXPORT_SYMBOL(set_cpu_present);
+
 /*
  * Activate the first processor.
  */