From patchwork Fri Sep  1 13:03:09 2023
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 13372597
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux@armlinux.org.uk, catalin.marinas@arm.com, will@kernel.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	sudeep.holla@arm.com, gregkh@linuxfoundation.org, rafael@kernel.org,
	mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
	viresh.kumar@linaro.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-pm@vger.kernel.org
Cc: conor.dooley@microchip.com, suagrfillet@gmail.com,
	ajones@ventanamicro.com, lftan@kernel.org,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 1/4] sched: consolidate and cleanup access to CPU's max compute capacity
Date: Fri, 1 Sep 2023 15:03:09 +0200
Message-Id: <20230901130312.247719-2-vincent.guittot@linaro.org>
In-Reply-To: <20230901130312.247719-1-vincent.guittot@linaro.org>
References: <20230901130312.247719-1-vincent.guittot@linaro.org>

Remove the struct rq cpu_capacity_orig field and use
arch_scale_cpu_capacity() instead.

The scheduler currently uses three ways to access a CPU's max compute
capacity:
- arch_scale_cpu_capacity(cpu), the default way to get the CPU's capacity;
- the cpu_capacity_orig field, which is periodically updated with
  arch_scale_cpu_capacity();
- capacity_orig_of(cpu), which encapsulates rq->cpu_capacity_orig.

There is no real need to save the value returned by
arch_scale_cpu_capacity() in struct rq: arch_scale_cpu_capacity() returns
either a per_cpu variable or a const value for systems which have only
one capacity (both shapes are sketched just below).

Remove cpu_capacity_orig and use arch_scale_cpu_capacity() everywhere.

No functional changes.
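For context, the two shapes of arch_scale_cpu_capacity() look roughly like
this in mainline (paraphrased from memory; the generic fallback lives in
include/linux/sched/topology.h and the per-CPU variant in the arch_topology
code, details may differ by kernel version):

	/* Generic fallback: one constant capacity for all CPUs. */
	#ifndef arch_scale_cpu_capacity
	static __always_inline
	unsigned long arch_scale_cpu_capacity(int cpu)
	{
		return SCHED_CAPACITY_SCALE;
	}
	#endif

	/* Arch override backed by a per-CPU variable (e.g. arm64): */
	static inline unsigned long topology_get_cpu_scale(int cpu)
	{
		return per_cpu(cpu_scale, cpu);
	}
	#define arch_scale_cpu_capacity topology_get_cpu_scale

Either way the call is a cheap, pure read, so caching its result in
rq->cpu_capacity_orig buys nothing.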
Some tests on Arm64:
- small SMP device (hikey): no noticeable changes
- HMP device (RB5): hackbench shows a minor improvement (1-2%)
- large SMP (thx2): hackbench and tbench show a minor improvement (1%)

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
 kernel/sched/core.c        |  2 +-
 kernel/sched/cpudeadline.c |  2 +-
 kernel/sched/deadline.c    |  4 ++--
 kernel/sched/fair.c        | 18 ++++++++----------
 kernel/sched/rt.c          |  2 +-
 kernel/sched/sched.h       |  6 ------
 kernel/sched/topology.c    |  7 +++++--
 7 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index efe3848978a0..6560392f2f83 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10014,7 +10014,7 @@ void __init sched_init(void)
 #ifdef CONFIG_SMP
 		rq->sd = NULL;
 		rq->rd = NULL;
-		rq->cpu_capacity = rq->cpu_capacity_orig = SCHED_CAPACITY_SCALE;
+		rq->cpu_capacity = SCHED_CAPACITY_SCALE;
 		rq->balance_callback = &balance_push_callback;
 		rq->active_balance = 0;
 		rq->next_balance = jiffies;
diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c
index 57c92d751bcd..95baa12a1029 100644
--- a/kernel/sched/cpudeadline.c
+++ b/kernel/sched/cpudeadline.c
@@ -131,7 +131,7 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,
 			if (!dl_task_fits_capacity(p, cpu)) {
 				cpumask_clear_cpu(cpu, later_mask);
 
-				cap = capacity_orig_of(cpu);
+				cap = arch_scale_cpu_capacity(cpu);
 
 				if (cap > max_cap ||
 				    (cpu == task_cpu(p) && cap == max_cap)) {
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 58b542bf2893..c57ef2e0db41 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -132,7 +132,7 @@ static inline unsigned long __dl_bw_capacity(const struct cpumask *mask)
 	int i;
 
 	for_each_cpu_and(i, mask, cpu_active_mask)
-		cap += capacity_orig_of(i);
+		cap += arch_scale_cpu_capacity(i);
 
 	return cap;
 }
@@ -144,7 +144,7 @@ static inline unsigned long __dl_bw_capacity(const struct cpumask *mask)
 static inline unsigned long dl_bw_capacity(int i)
 {
 	if (!sched_asym_cpucap_active() &&
-	    capacity_orig_of(i) == SCHED_CAPACITY_SCALE) {
+	    arch_scale_cpu_capacity(i) == SCHED_CAPACITY_SCALE) {
 		return dl_bw_cpus(i) << SCHED_CAPACITY_SHIFT;
 	} else {
 		RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0b7445cd5af9..06d6d0dde48a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4690,7 +4690,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	 * To avoid overestimation of actual task utilization, skip updates if
 	 * we cannot grant there is idle time in this CPU.
 	 */
-	if (task_util(p) > capacity_orig_of(cpu_of(rq_of(cfs_rq))))
+	if (task_util(p) > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
 		return;
 
 	/*
@@ -4738,14 +4738,14 @@ static inline int util_fits_cpu(unsigned long util,
 		return fits;
 
 	/*
-	 * We must use capacity_orig_of() for comparing against uclamp_min and
+	 * We must use arch_scale_cpu_capacity() for comparing against uclamp_min and
 	 * uclamp_max. We only care about capacity pressure (by using
 	 * capacity_of()) for comparing against the real util.
 	 *
 	 * If a task is boosted to 1024 for example, we don't want a tiny
 	 * pressure to skew the check whether it fits a CPU or not.
 	 *
-	 * Similarly if a task is capped to capacity_orig_of(little_cpu), it
+	 * Similarly if a task is capped to arch_scale_cpu_capacity(little_cpu), it
 	 * should fit a little cpu even if there's some pressure.
 	 *
 	 * Only exception is for thermal pressure since it has a direct impact
@@ -4757,7 +4757,7 @@ static inline int util_fits_cpu(unsigned long util,
 	 * For uclamp_max, we can tolerate a drop in performance level as the
 	 * goal is to cap the task. So it's okay if it's getting less.
 	 */
-	capacity_orig = capacity_orig_of(cpu);
+	capacity_orig = arch_scale_cpu_capacity(cpu);
 	capacity_orig_thermal = capacity_orig - arch_scale_thermal_pressure(cpu);
 
 	/*
@@ -7226,7 +7226,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 		 * Look for the CPU with best capacity.
 		 */
 		else if (fits < 0)
-			cpu_cap = capacity_orig_of(cpu) - thermal_load_avg(cpu_rq(cpu));
+			cpu_cap = arch_scale_cpu_capacity(cpu) - thermal_load_avg(cpu_rq(cpu));
 
 		/*
 		 * First, select CPU which fits better (-1 being better than 0).
@@ -7468,7 +7468,7 @@ cpu_util(int cpu, struct task_struct *p, int dst_cpu, int boost)
 		util = max(util, util_est);
 	}
 
-	return min(util, capacity_orig_of(cpu));
+	return min(util, arch_scale_cpu_capacity(cpu));
 }
 
 unsigned long cpu_util_cfs(int cpu)
@@ -9251,8 +9251,6 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 	unsigned long capacity = scale_rt_capacity(cpu);
 	struct sched_group *sdg = sd->groups;
 
-	cpu_rq(cpu)->cpu_capacity_orig = arch_scale_cpu_capacity(cpu);
-
 	if (!capacity)
 		capacity = 1;
@@ -9328,7 +9326,7 @@ static inline int
 check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
 {
 	return ((rq->cpu_capacity * sd->imbalance_pct) <
-				(rq->cpu_capacity_orig * 100));
+				(arch_scale_cpu_capacity(cpu_of(rq)) * 100));
 }
 
 /*
@@ -9339,7 +9337,7 @@ check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
 static inline int check_misfit_status(struct rq *rq, struct sched_domain *sd)
 {
 	return rq->misfit_task_load &&
-		(rq->cpu_capacity_orig < rq->rd->max_cpu_capacity ||
+		(arch_scale_cpu_capacity(rq->cpu) < rq->rd->max_cpu_capacity ||
 		 check_cpu_capacity(rq, sd));
 }
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 0597ba0f85ff..8f4e8db6e234 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -515,7 +515,7 @@ static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
 	min_cap = uclamp_eff_value(p, UCLAMP_MIN);
 	max_cap = uclamp_eff_value(p, UCLAMP_MAX);
 
-	cpu_cap = capacity_orig_of(cpu);
+	cpu_cap = arch_scale_cpu_capacity(cpu);
 
 	return cpu_cap >= min(min_cap, max_cap);
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3a01b7a2bf66..17ae151e90c0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1048,7 +1048,6 @@ struct rq {
 	struct sched_domain __rcu	*sd;
 
 	unsigned long		cpu_capacity;
-	unsigned long		cpu_capacity_orig;
 
 	struct balance_callback *balance_callback;
 
@@ -2974,11 +2973,6 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
 #endif
 
 #ifdef CONFIG_SMP
-static inline unsigned long capacity_orig_of(int cpu)
-{
-	return cpu_rq(cpu)->cpu_capacity_orig;
-}
-
 /**
  * enum cpu_util_type - CPU utilization type
  * @FREQUENCY_UTIL:	Utilization used to select frequency
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 05a5bc678c08..e6b0b6a8e60a 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2479,12 +2479,15 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	/* Attach the domains */
 	rcu_read_lock();
 	for_each_cpu(i, cpu_map) {
+		unsigned long capacity;
+
 		rq = cpu_rq(i);
 		sd = *per_cpu_ptr(d.sd, i);
 
+		capacity = arch_scale_cpu_capacity(i);
 		/* Use READ_ONCE()/WRITE_ONCE() to avoid load/store tearing: */
-		if (rq->cpu_capacity_orig > READ_ONCE(d.rd->max_cpu_capacity))
-			WRITE_ONCE(d.rd->max_cpu_capacity, rq->cpu_capacity_orig);
+		if (capacity > READ_ONCE(d.rd->max_cpu_capacity))
+			WRITE_ONCE(d.rd->max_cpu_capacity, capacity);
 
 		cpu_attach_domain(sd, d.rd, i);
 	}
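A quick sanity check on the check_cpu_capacity() hunk: the condition
rq->cpu_capacity * sd->imbalance_pct < arch_scale_cpu_capacity(cpu) * 100
is unchanged in substance; only the source of the original capacity moves.
Assuming the default sd->imbalance_pct of 117 and a max capacity of 1024,
a CPU is flagged as significantly capacity-reduced once its remaining
capacity drops below 1024 * 100 / 117, i.e. roughly 875.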
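As for the topology.c hunk, the READ_ONCE()/WRITE_ONCE() pair is the usual
tearing guard around a lockless running maximum. A minimal standalone
sketch of the same pattern (hypothetical helper, not part of this patch):

	/*
	 * Track a lockless running maximum. Concurrent readers may see a
	 * slightly stale value, but never a torn one.
	 */
	static void update_max_capacity(unsigned long *max, unsigned long cap)
	{
		if (cap > READ_ONCE(*max))
			WRITE_ONCE(*max, cap);
	}

The for_each_cpu() loop in build_sched_domains() runs single-threaded, so
the annotations guard against concurrent readers of max_cpu_capacity
elsewhere in the scheduler, not against racing writers.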