From patchwork Wed Feb 26 07:49:16 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 13991650
X-Patchwork-Delegate: mario.limonciello@amd.com
From: Mario Limonciello
To: "Gautham R. Shenoy", Perry Yuan
Cc: Dhananjay Ugwekar, linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello, Dhananjay Ugwekar, Miroslav Pavleski
Subject: [PATCH v5 01/19] cpufreq/amd-pstate: Invalidate cppc_req_cached during suspend
Date: Wed, 26 Feb 2025 01:49:16 -0600
Message-ID: <20250226074934.1667721-2-superm1@kernel.org>
In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org>
References: <20250226074934.1667721-1-superm1@kernel.org>

From: Mario Limonciello

During resume it's possible the firmware didn't restore the CPPC request MSR, but the kernel thinks the values still line up. This leads to incorrect performance after resume from suspend. To fix the issue, invalidate the cached value at suspend. During resume, use the saved values programmed as cached limits.

Reviewed-by: Gautham R.
Shenoy Reviewed-by: Dhananjay Ugwekar Reported-by: Miroslav Pavleski Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217931 Signed-off-by: Mario Limonciello --- drivers/cpufreq/amd-pstate.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index dc38ef4c8f725..a093389a8fe3e 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -1611,7 +1611,7 @@ static int amd_pstate_epp_reenable(struct cpufreq_policy *policy) max_perf, policy->boost_enabled); } - return amd_pstate_update_perf(cpudata, 0, 0, max_perf, cpudata->epp_cached, false); + return amd_pstate_epp_update_limit(policy); } static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy) @@ -1660,6 +1660,9 @@ static int amd_pstate_epp_suspend(struct cpufreq_policy *policy) if (cppc_state != AMD_PSTATE_ACTIVE) return 0; + /* invalidate to ensure it's rewritten during resume */ + cpudata->cppc_req_cached = 0; + /* set this flag to avoid setting core offline*/ cpudata->suspended = true; From patchwork Wed Feb 26 07:49:17 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991651 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D885226E17B; Wed, 26 Feb 2025 07:49:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556184; cv=none; b=A9WsftOnN/jhxYagGr3kPRRf/kEuja9SplpgXl/ybKhZs6wZiUd5Y6K10AFIGIkNmERSGYrbCekFd2yAd7XcH/mw3ohOl6kfLIkE7vDXBLd58umrAkQmK2mN6kuGlV4SxeOaDwxWGqvTgsGhwwjExuyQAxaTOI7IbkbnEpxE/cM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556184; c=relaxed/simple; bh=rUUQurfX6uqsZ9IY8GLd9ULbG0hTJFoFjvGUQaHRXMg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=SiEsZY12Tc1a6HWDHekpZ7F3sGy8gdgXASZ/s5xC2lW+MzuIIp0/6aTYPR1FtyTMwelcGdUc7VL4a+4gfch/zaNjeimliPNGFOg4dw4v3jX37uEZdsfDu+96KCoAqBOLXtTXlhZNRp3ZCbAGG14b1Jkq6jUbo8HM1LjLIF2cc2Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=lYiki/zG; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="lYiki/zG" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A9B8FC4CEE2; Wed, 26 Feb 2025 07:49:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556184; bh=rUUQurfX6uqsZ9IY8GLd9ULbG0hTJFoFjvGUQaHRXMg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lYiki/zGaGrf9w5yA7PlmRS+3QRoNNzDQ8/DL4UZtp2m8nTTBQQZWvgUsR++BOo8b DNtuwsKpGMtLyFQwrvIyt6jP/3CILWIfhE7hEmqJjw/WTsjh+PAMAFMLJ5MLjps/AS HaWa2I3fi+HQ+1WhSn6ZjMN56mMDuPT0+bPdKSipAASfVDgopEyDuW49cJtnS1d3q6 TO9KdCppw5b7OxqNbXwpOqp3/oNN+TQ5MPGJNFW9uR9lml+MnRMx+NppY45EEJdT9+ A8+/yeeoWrMT5efUpaxQiAdo2ke0Hf6dAHHo7wZIqbWwNBc99CMkoL3r7MC/lLmdsA +BVgySQHMXDaQ== From: Mario Limonciello To: "Gautham R . 
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 02/19] cpufreq/amd-pstate: Show a warning when a CPU fails to setup Date: Wed, 26 Feb 2025 01:49:17 -0600 Message-ID: <20250226074934.1667721-3-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello I came across a system that MSR_AMD_CPPC_CAP1 for some CPUs isn't populated. This is an unexpected behavior that is most likely a BIOS bug. In the event it happens I'd like users to report bugs to properly root cause and get this fixed. Reviewed-by: Gautham R. Shenoy Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- drivers/cpufreq/amd-pstate.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index a093389a8fe3e..1b98f5d76894d 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -1034,6 +1034,7 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy) free_cpudata2: freq_qos_remove_request(&cpudata->req[0]); free_cpudata1: + pr_warn("Failed to initialize CPU %d: %d\n", policy->cpu, ret); kfree(cpudata); return ret; } @@ -1527,6 +1528,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) return 0; free_cpudata1: + pr_warn("Failed to initialize CPU %d: %d\n", policy->cpu, ret); kfree(cpudata); return ret; } From patchwork Wed Feb 26 07:49:18 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991652 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4C46626F455; Wed, 26 Feb 2025 07:49:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556186; cv=none; b=qNCVwVTC1wuGKg4AISI4dfy0Kx2or3Phj3TRZQk/+00Exze8OoLfavDHM6oy42QzGKgwQF1B8AYqF2YuZ2+SlSiEBVYPviCw+j0Jt+Yxt140OxT/KFxRfSDWHrSghW4tmODAg4Xh3FqhQkOAGN5ia/JonSNVej5mA+Whmy45cvY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556186; c=relaxed/simple; bh=+HSwYryLgJuFS/PerQiTbC2rJtMVtZPYt+ffFn0P+Ws=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=fZ1x6KYXzTFUdIGiPoQxKSO4504s2HMsnWIb7yolBW8kDs94DMXdODff05ZzxlJpYQuHDaPLA/98pPy9lzGnaigQs6aNiS6oSusJgr+DTvJb2u0TifsdEhTEtIAwpkEZwMbd21V+wJPH9GjGl7coP6YZOvxLYMN/F6Zpo9SmgJ8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=SAlPnMOj; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="SAlPnMOj" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1049FC4CEE9; Wed, 26 Feb 2025 07:49:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=kernel.org; s=k20201202; t=1740556186; bh=+HSwYryLgJuFS/PerQiTbC2rJtMVtZPYt+ffFn0P+Ws=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SAlPnMOjLnWfG0EVu/mIg42KLcLavLgGt7YNOzqIrzwnFwg+YYa6ObikKPRPwrm28 QImgUa4bOauO7gLlgYIkTd7B4qGhyg8i5s6YLV/tvK4kQ/SsCz9+HObPg+liN1TKth cUU65qYoBStdF1z5fuTDpYEk10/8v63a4oZ6a9yVYKS1H++KjuKh8lDRoTnMseqN0c jybjMMT9TEjie6B0gC4eOHQjxyiVQBucRXjiImDcvbOEMuASlrAYz4dc8q5nKfuOQ4 2bH6n5msxFzevH6tz76+FbSdEe91Ww7D/rTJS+pAx6Yjjmy4GVuX55ZeH6SzMtA+1X 1yucOxbeS3g2Q== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 03/19] cpufreq/amd-pstate: Drop min and max cached frequencies Date: Wed, 26 Feb 2025 01:49:18 -0600 Message-ID: <20250226074934.1667721-4-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello Use the perf_to_freq helpers to calculate this on the fly. As the members are no longer cached add an extra check into amd_pstate_epp_update_limit() to avoid unnecessary calls in amd_pstate_update_min_max_limit(). Reviewed-by: Gautham R. Shenoy Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- v5: * add tag * Update commit message v4: * Avoid some unnecessary changes to amd_pstate_init_freq() * Add tag v3: * Fix calc error for min_freq v2: * Keep cached limits --- drivers/cpufreq/amd-pstate-ut.c | 14 +++++------ drivers/cpufreq/amd-pstate.c | 43 +++++++++------------------------ drivers/cpufreq/amd-pstate.h | 9 ++----- 3 files changed, 20 insertions(+), 46 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index 3a0a380c3590c..445278cf40b61 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -214,14 +214,14 @@ static void amd_pstate_ut_check_freq(u32 index) break; cpudata = policy->driver_data; - if (!((cpudata->max_freq >= cpudata->nominal_freq) && + if (!((policy->cpuinfo.max_freq >= cpudata->nominal_freq) && (cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) && - (cpudata->lowest_nonlinear_freq > cpudata->min_freq) && - (cpudata->min_freq > 0))) { + (cpudata->lowest_nonlinear_freq > policy->cpuinfo.min_freq) && + (policy->cpuinfo.min_freq > 0))) { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n", - __func__, cpu, cpudata->max_freq, cpudata->nominal_freq, - cpudata->lowest_nonlinear_freq, cpudata->min_freq); + __func__, cpu, policy->cpuinfo.max_freq, cpudata->nominal_freq, + cpudata->lowest_nonlinear_freq, policy->cpuinfo.min_freq); goto skip_test; } @@ -233,13 +233,13 @@ static void amd_pstate_ut_check_freq(u32 index) } if (cpudata->boost_supported) { - if ((policy->max == cpudata->max_freq) || + if ((policy->max == policy->cpuinfo.max_freq) || (policy->max == cpudata->nominal_freq)) amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; else { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n", - __func__, cpu, policy->max, cpudata->max_freq, + __func__, cpu, 
policy->max, policy->cpuinfo.max_freq, cpudata->nominal_freq); goto skip_test; } diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 1b98f5d76894d..fb28b27558882 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -717,7 +717,7 @@ static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on) int ret = 0; nominal_freq = READ_ONCE(cpudata->nominal_freq); - max_freq = READ_ONCE(cpudata->max_freq); + max_freq = perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf)); if (on) policy->cpuinfo.max_freq = max_freq; @@ -923,13 +923,10 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) nominal_freq *= 1000; WRITE_ONCE(cpudata->nominal_freq, nominal_freq); - WRITE_ONCE(cpudata->min_freq, min_freq); max_freq = perf_to_freq(cpudata, cpudata->highest_perf); lowest_nonlinear_freq = perf_to_freq(cpudata, cpudata->lowest_nonlinear_perf); - WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq); - WRITE_ONCE(cpudata->max_freq, max_freq); /** * Below values need to be initialized correctly, otherwise driver will fail to load @@ -954,9 +951,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) static int amd_pstate_cpu_init(struct cpufreq_policy *policy) { - int min_freq, max_freq, ret; - struct device *dev; struct amd_cpudata *cpudata; + struct device *dev; + int ret; /* * Resetting PERF_CTL_MSR will put the CPU in P0 frequency, @@ -987,17 +984,11 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy) if (ret) goto free_cpudata1; - min_freq = READ_ONCE(cpudata->min_freq); - max_freq = READ_ONCE(cpudata->max_freq); - policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu); policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu); - policy->min = min_freq; - policy->max = max_freq; - - policy->cpuinfo.min_freq = min_freq; - policy->cpuinfo.max_freq = max_freq; + policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf); + policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf); policy->boost_enabled = READ_ONCE(cpudata->boost_supported); @@ -1021,9 +1012,6 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy) goto free_cpudata2; } - cpudata->max_limit_freq = max_freq; - cpudata->min_limit_freq = min_freq; - policy->driver_data = cpudata; if (!current_pstate_driver->adjust_perf) @@ -1081,14 +1069,10 @@ static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy) static ssize_t show_amd_pstate_max_freq(struct cpufreq_policy *policy, char *buf) { - int max_freq; struct amd_cpudata *cpudata = policy->driver_data; - max_freq = READ_ONCE(cpudata->max_freq); - if (max_freq < 0) - return max_freq; - return sysfs_emit(buf, "%u\n", max_freq); + return sysfs_emit(buf, "%u\n", perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf))); } static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *policy, @@ -1446,10 +1430,10 @@ static bool amd_pstate_acpi_pm_profile_undefined(void) static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) { - int min_freq, max_freq, ret; struct amd_cpudata *cpudata; struct device *dev; u64 value; + int ret; /* * Resetting PERF_CTL_MSR will put the CPU in P0 frequency, @@ -1480,19 +1464,13 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) if (ret) goto free_cpudata1; - min_freq = READ_ONCE(cpudata->min_freq); - max_freq = READ_ONCE(cpudata->max_freq); - - policy->cpuinfo.min_freq = min_freq; - 
policy->cpuinfo.max_freq = max_freq; + policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf); + policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf); /* It will be updated by governor */ policy->cur = policy->cpuinfo.min_freq; policy->driver_data = cpudata; - policy->min = policy->cpuinfo.min_freq; - policy->max = policy->cpuinfo.max_freq; - policy->boost_enabled = READ_ONCE(cpudata->boost_supported); /* @@ -1550,7 +1528,8 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) struct amd_cpudata *cpudata = policy->driver_data; u8 epp; - amd_pstate_update_min_max_limit(policy); + if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq) + amd_pstate_update_min_max_limit(policy); if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) epp = 0; diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h index 19d405c6d805e..0149933692458 100644 --- a/drivers/cpufreq/amd-pstate.h +++ b/drivers/cpufreq/amd-pstate.h @@ -46,8 +46,6 @@ struct amd_aperf_mperf { * @max_limit_perf: Cached value of the performance corresponding to policy->max * @min_limit_freq: Cached value of policy->min (in khz) * @max_limit_freq: Cached value of policy->max (in khz) - * @max_freq: the frequency (in khz) that mapped to highest_perf - * @min_freq: the frequency (in khz) that mapped to lowest_perf * @nominal_freq: the frequency (in khz) that mapped to nominal_perf * @lowest_nonlinear_freq: the frequency (in khz) that mapped to lowest_nonlinear_perf * @cur: Difference of Aperf/Mperf/tsc count between last and current sample @@ -77,11 +75,8 @@ struct amd_cpudata { u8 prefcore_ranking; u8 min_limit_perf; u8 max_limit_perf; - u32 min_limit_freq; - u32 max_limit_freq; - - u32 max_freq; - u32 min_freq; + u32 min_limit_freq; + u32 max_limit_freq; u32 nominal_freq; u32 lowest_nonlinear_freq; From patchwork Wed Feb 26 07:49:19 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991653 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EC96C26FDA9; Wed, 26 Feb 2025 07:49:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556188; cv=none; b=jpwr/WInhCWt5ug/BCfYbeZkNg+p/jSBs7CFO4Yn4nAv2YGBJilG9TGYVTdxELBxQWooPD3X0NUU9Eluflzry+kDib5VQu6EyhBCixY3/pKEkb+MOnohijs1fWtEp+Iziop4tcstNpCNRoqmBG0+24DBaZhNR2K7j+WoScgL3Sc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556188; c=relaxed/simple; bh=h78Ff6bcEkgsh6uZSEShBWTdWNLqpuXsX+Fu1wUedHs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VSmLYbOKkPjYPPcWUzcp8Vl3HWRwYX9TwZm2H7eDdV8cyc5zYkf3V9qKlnaeltQGhpNYblYUm3LHJ5faBt+OtQ74CyKB6jDc5det1K/AGW7EGKIH8bYo0oCvAZAx7ghQIQsoq8dSkWoWLLAeL4MVz1ScxyxP/UTXfdvfAD5c3B4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=pSJeAkDe; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="pSJeAkDe" Received: by 
smtp.kernel.org (Postfix) with ESMTPSA id 78848C4CEEC; Wed, 26 Feb 2025 07:49:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556187; bh=h78Ff6bcEkgsh6uZSEShBWTdWNLqpuXsX+Fu1wUedHs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pSJeAkDeUcROL++6WYqh5vCh1Tqc1Ia20KjokfBMs9N3Tqs41puUpIheyTw8WsNZ5 gBos1iXwXYZDy9x43T+DSgpRcnrUjVracsDpA+lWT7SBRob1PMYs7Mbdr/f2lm1uyf sOQEcWNsb67jojqLEjkZFbVx1ceDRFkda/x3H/ifS1vv6e6ycD+qdmOk0XKdMUSqSh gVjKdqNj8qkoIn1QsYdZA2PSj2YXI2+uQS4vpBzxd5fJQ0LxPwO0tugIcvXj7OI+uy QCBqp0sj6TCrzAQGK7JzLGhgjEd9voOC8mPuSP29Ss7M8XMVdFEpj98QFFfoQoqAK/ tN1RcI7gdLDog== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello Subject: [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union Date: Wed, 26 Feb 2025 01:49:19 -0600 Message-ID: <20250226074934.1667721-5-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello By storing perf values in a union all the writes and reads can be done atomically, removing the need for some concurrency protections. While making this change, also drop the cached frequency values, using inline helpers to calculate them on demand from perf value. Reviewed-by: Gautham R. Shenoy Signed-off-by: Mario Limonciello Reviewed-by: Dhananjay Ugwekar --- v5: * Use suggestion for union docstring * Flush out for quirk case * Use perf variable in unit test v4: * Rebase on earlier changes v3: * Pick up tag v2: * cache perf variable in unit tests * Drop unnecessary check from amd_pstate_update_min_max_limit() * Consistency with READ_ONCE() * Drop unneeded policy checks * add kdoc --- drivers/cpufreq/amd-pstate-ut.c | 18 +-- drivers/cpufreq/amd-pstate.c | 205 ++++++++++++++++++-------------- drivers/cpufreq/amd-pstate.h | 51 +++++--- 3 files changed, 158 insertions(+), 116 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index 445278cf40b61..5f6a92a816e61 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -129,6 +129,7 @@ static void amd_pstate_ut_check_perf(u32 index) struct cppc_perf_caps cppc_perf; struct cpufreq_policy *policy = NULL; struct amd_cpudata *cpudata = NULL; + union perf_cached cur_perf; for_each_possible_cpu(cpu) { policy = cpufreq_cpu_get(cpu); @@ -162,19 +163,20 @@ static void amd_pstate_ut_check_perf(u32 index) lowest_perf = AMD_CPPC_LOWEST_PERF(cap1); } - if (highest_perf != READ_ONCE(cpudata->highest_perf) && !cpudata->hw_prefcore) { + cur_perf = READ_ONCE(cpudata->perf); + if (highest_perf != cur_perf.highest_perf && !cpudata->hw_prefcore) { pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n", - __func__, cpu, highest_perf, cpudata->highest_perf); + __func__, cpu, highest_perf, cur_perf.highest_perf); goto skip_test; } - if ((nominal_perf != READ_ONCE(cpudata->nominal_perf)) || - (lowest_nonlinear_perf != READ_ONCE(cpudata->lowest_nonlinear_perf)) || - (lowest_perf != READ_ONCE(cpudata->lowest_perf))) { + if (nominal_perf != cur_perf.nominal_perf || + (lowest_nonlinear_perf != cur_perf.lowest_nonlinear_perf) || + (lowest_perf != 
cur_perf.lowest_perf)) { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d nominal=%d %d lowest_nonlinear=%d %d lowest=%d %d, they should be equal!\n", - __func__, cpu, nominal_perf, cpudata->nominal_perf, - lowest_nonlinear_perf, cpudata->lowest_nonlinear_perf, - lowest_perf, cpudata->lowest_perf); + __func__, cpu, nominal_perf, cur_perf.nominal_perf, + lowest_nonlinear_perf, cur_perf.lowest_nonlinear_perf, + lowest_perf, cur_perf.lowest_perf); goto skip_test; } diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index fb28b27558882..bd8bcda4e6eb0 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -142,18 +142,17 @@ static struct quirk_entry quirk_amd_7k62 = { .lowest_freq = 550, }; -static inline u8 freq_to_perf(struct amd_cpudata *cpudata, unsigned int freq_val) +static inline u8 freq_to_perf(union perf_cached perf, u32 nominal_freq, unsigned int freq_val) { - u32 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * cpudata->nominal_perf, - cpudata->nominal_freq); + u32 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * perf.nominal_perf, nominal_freq); - return (u8)clamp(perf_val, cpudata->lowest_perf, cpudata->highest_perf); + return (u8)clamp(perf_val, perf.lowest_perf, perf.highest_perf); } -static inline u32 perf_to_freq(struct amd_cpudata *cpudata, u8 perf_val) +static inline u32 perf_to_freq(union perf_cached perf, u32 nominal_freq, u8 perf_val) { - return DIV_ROUND_UP_ULL((u64)cpudata->nominal_freq * perf_val, - cpudata->nominal_perf); + return DIV_ROUND_UP_ULL((u64)nominal_freq * perf_val, + perf.nominal_perf); } static int __init dmi_matched_7k62_bios_bug(const struct dmi_system_id *dmi) @@ -347,7 +346,9 @@ static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy, } if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, + union perf_cached perf = READ_ONCE(cpudata->perf); + + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp, FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached), @@ -425,6 +426,7 @@ static inline int amd_pstate_cppc_enable(bool enable) static int msr_init_perf(struct amd_cpudata *cpudata) { + union perf_cached perf = READ_ONCE(cpudata->perf); u64 cap1, numerator; int ret = rdmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, @@ -436,19 +438,21 @@ static int msr_init_perf(struct amd_cpudata *cpudata) if (ret) return ret; - WRITE_ONCE(cpudata->highest_perf, numerator); - WRITE_ONCE(cpudata->max_limit_perf, numerator); - WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1)); - WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1)); - WRITE_ONCE(cpudata->lowest_perf, AMD_CPPC_LOWEST_PERF(cap1)); + perf.highest_perf = numerator; + perf.max_limit_perf = numerator; + perf.min_limit_perf = AMD_CPPC_LOWEST_PERF(cap1); + perf.nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1); + perf.lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1); + perf.lowest_perf = AMD_CPPC_LOWEST_PERF(cap1); + WRITE_ONCE(cpudata->perf, perf); WRITE_ONCE(cpudata->prefcore_ranking, AMD_CPPC_HIGHEST_PERF(cap1)); - WRITE_ONCE(cpudata->min_limit_perf, AMD_CPPC_LOWEST_PERF(cap1)); return 0; } static int shmem_init_perf(struct amd_cpudata *cpudata) { struct cppc_perf_caps cppc_perf; + union perf_cached perf = READ_ONCE(cpudata->perf); u64 numerator; int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf); @@ -459,14 +463,14 @@ static int shmem_init_perf(struct amd_cpudata 
*cpudata) if (ret) return ret; - WRITE_ONCE(cpudata->highest_perf, numerator); - WRITE_ONCE(cpudata->max_limit_perf, numerator); - WRITE_ONCE(cpudata->nominal_perf, cppc_perf.nominal_perf); - WRITE_ONCE(cpudata->lowest_nonlinear_perf, - cppc_perf.lowest_nonlinear_perf); - WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf); + perf.highest_perf = numerator; + perf.max_limit_perf = numerator; + perf.min_limit_perf = cppc_perf.lowest_perf; + perf.nominal_perf = cppc_perf.nominal_perf; + perf.lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf; + perf.lowest_perf = cppc_perf.lowest_perf; + WRITE_ONCE(cpudata->perf, perf); WRITE_ONCE(cpudata->prefcore_ranking, cppc_perf.highest_perf); - WRITE_ONCE(cpudata->min_limit_perf, cppc_perf.lowest_perf); if (cppc_state == AMD_PSTATE_ACTIVE) return 0; @@ -549,14 +553,14 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf, u8 des_perf, u8 max_perf, bool fast_switch, int gov_flags) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpudata->cpu); - u8 nominal_perf = READ_ONCE(cpudata->nominal_perf); + union perf_cached perf = READ_ONCE(cpudata->perf); if (!policy) return; des_perf = clamp_t(u8, des_perf, min_perf, max_perf); - policy->cur = perf_to_freq(cpudata, des_perf); + policy->cur = perf_to_freq(perf, cpudata->nominal_freq, des_perf); if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) { min_perf = des_perf; @@ -565,7 +569,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf, /* limit the max perf when core performance boost feature is disabled */ if (!cpudata->boost_supported) - max_perf = min_t(u8, nominal_perf, max_perf); + max_perf = min_t(u8, perf.nominal_perf, max_perf); if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) { trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq, @@ -602,39 +606,41 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy_data) return 0; } -static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy) +static void amd_pstate_update_min_max_limit(struct cpufreq_policy *policy) { - u8 max_limit_perf, min_limit_perf; struct amd_cpudata *cpudata = policy->driver_data; + union perf_cached perf = READ_ONCE(cpudata->perf); - max_limit_perf = freq_to_perf(cpudata, policy->max); - min_limit_perf = freq_to_perf(cpudata, policy->min); + perf.max_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->max); + perf.min_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->min); if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) - min_limit_perf = min(cpudata->nominal_perf, max_limit_perf); + perf.min_limit_perf = min(perf.nominal_perf, perf.max_limit_perf); - WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf); - WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf); WRITE_ONCE(cpudata->max_limit_freq, policy->max); WRITE_ONCE(cpudata->min_limit_freq, policy->min); - - return 0; + WRITE_ONCE(cpudata->perf, perf); } static int amd_pstate_update_freq(struct cpufreq_policy *policy, unsigned int target_freq, bool fast_switch) { struct cpufreq_freqs freqs; - struct amd_cpudata *cpudata = policy->driver_data; + struct amd_cpudata *cpudata; + union perf_cached perf; u8 des_perf; + cpudata = policy->driver_data; + if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq) amd_pstate_update_min_max_limit(policy); + perf = READ_ONCE(cpudata->perf); + freqs.old = policy->cur; freqs.new = target_freq; - des_perf = freq_to_perf(cpudata, 
target_freq); + des_perf = freq_to_perf(perf, cpudata->nominal_freq, target_freq); WARN_ON(fast_switch && !policy->fast_switch_enabled); /* @@ -645,8 +651,8 @@ static int amd_pstate_update_freq(struct cpufreq_policy *policy, if (!fast_switch) cpufreq_freq_transition_begin(policy, &freqs); - amd_pstate_update(cpudata, cpudata->min_limit_perf, des_perf, - cpudata->max_limit_perf, fast_switch, + amd_pstate_update(cpudata, perf.min_limit_perf, des_perf, + perf.max_limit_perf, fast_switch, policy->governor->flags); if (!fast_switch) @@ -675,9 +681,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu, unsigned long target_perf, unsigned long capacity) { - u8 max_perf, min_perf, des_perf, cap_perf, min_limit_perf; + u8 max_perf, min_perf, des_perf, cap_perf; struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpu); struct amd_cpudata *cpudata; + union perf_cached perf; if (!policy) return; @@ -687,8 +694,8 @@ static void amd_pstate_adjust_perf(unsigned int cpu, if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq) amd_pstate_update_min_max_limit(policy); - cap_perf = READ_ONCE(cpudata->highest_perf); - min_limit_perf = READ_ONCE(cpudata->min_limit_perf); + perf = READ_ONCE(cpudata->perf); + cap_perf = perf.highest_perf; des_perf = cap_perf; if (target_perf < capacity) @@ -699,10 +706,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu, else min_perf = cap_perf; - if (min_perf < min_limit_perf) - min_perf = min_limit_perf; + if (min_perf < perf.min_limit_perf) + min_perf = perf.min_limit_perf; - max_perf = cpudata->max_limit_perf; + max_perf = perf.max_limit_perf; if (max_perf < min_perf) max_perf = min_perf; @@ -713,11 +720,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu, static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on) { struct amd_cpudata *cpudata = policy->driver_data; + union perf_cached perf = READ_ONCE(cpudata->perf); u32 nominal_freq, max_freq; int ret = 0; nominal_freq = READ_ONCE(cpudata->nominal_freq); - max_freq = perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf)); + max_freq = perf_to_freq(perf, cpudata->nominal_freq, perf.highest_perf); if (on) policy->cpuinfo.max_freq = max_freq; @@ -888,30 +896,30 @@ static u32 amd_pstate_get_transition_latency(unsigned int cpu) } /* - * amd_pstate_init_freq: Initialize the max_freq, min_freq, - * nominal_freq and lowest_nonlinear_freq for - * the @cpudata object. + * amd_pstate_init_freq: Initialize the nominal_freq and lowest_nonlinear_freq + * for the @cpudata object. * - * Requires: highest_perf, lowest_perf, nominal_perf and - * lowest_nonlinear_perf members of @cpudata to be - * initialized. + * Requires: all perf members of @cpudata to be initialized. * - * Returns 0 on success, non-zero value on failure. + * Returns 0 on success, non-zero value on failure. 
*/ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) { - int ret; - u32 min_freq, max_freq; - u32 nominal_freq, lowest_nonlinear_freq; + u32 min_freq, max_freq, nominal_freq, lowest_nonlinear_freq; struct cppc_perf_caps cppc_perf; + union perf_cached perf; + int ret; ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf); if (ret) return ret; + perf = READ_ONCE(cpudata->perf); - if (quirks && quirks->lowest_freq) + if (quirks && quirks->lowest_freq) { min_freq = quirks->lowest_freq; - else + perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq); + WRITE_ONCE(cpudata->perf, perf); + } else min_freq = cppc_perf.lowest_freq; if (quirks && quirks->nominal_freq) @@ -924,8 +932,8 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) WRITE_ONCE(cpudata->nominal_freq, nominal_freq); - max_freq = perf_to_freq(cpudata, cpudata->highest_perf); - lowest_nonlinear_freq = perf_to_freq(cpudata, cpudata->lowest_nonlinear_perf); + max_freq = perf_to_freq(perf, nominal_freq, perf.highest_perf); + lowest_nonlinear_freq = perf_to_freq(perf, nominal_freq, perf.lowest_nonlinear_perf); WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq); /** @@ -952,6 +960,7 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) static int amd_pstate_cpu_init(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata; + union perf_cached perf; struct device *dev; int ret; @@ -987,8 +996,14 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy) policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu); policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu); - policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf); - policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf); + perf = READ_ONCE(cpudata->perf); + + policy->cpuinfo.min_freq = policy->min = perf_to_freq(perf, + cpudata->nominal_freq, + perf.lowest_perf); + policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf, + cpudata->nominal_freq, + perf.highest_perf); policy->boost_enabled = READ_ONCE(cpudata->boost_supported); @@ -1069,23 +1084,27 @@ static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy) static ssize_t show_amd_pstate_max_freq(struct cpufreq_policy *policy, char *buf) { - struct amd_cpudata *cpudata = policy->driver_data; + struct amd_cpudata *cpudata; + union perf_cached perf; + cpudata = policy->driver_data; + perf = READ_ONCE(cpudata->perf); - return sysfs_emit(buf, "%u\n", perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf))); + return sysfs_emit(buf, "%u\n", + perf_to_freq(perf, cpudata->nominal_freq, perf.highest_perf)); } static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *policy, char *buf) { - int freq; - struct amd_cpudata *cpudata = policy->driver_data; + struct amd_cpudata *cpudata; + union perf_cached perf; - freq = READ_ONCE(cpudata->lowest_nonlinear_freq); - if (freq < 0) - return freq; + cpudata = policy->driver_data; + perf = READ_ONCE(cpudata->perf); - return sysfs_emit(buf, "%u\n", freq); + return sysfs_emit(buf, "%u\n", + perf_to_freq(perf, cpudata->nominal_freq, perf.lowest_nonlinear_perf)); } /* @@ -1095,12 +1114,11 @@ static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *poli static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy, char *buf) { - u8 perf; - struct amd_cpudata *cpudata = policy->driver_data; + struct amd_cpudata *cpudata; - perf = READ_ONCE(cpudata->highest_perf); + cpudata = 
policy->driver_data; - return sysfs_emit(buf, "%u\n", perf); + return sysfs_emit(buf, "%u\n", cpudata->perf.highest_perf); } static ssize_t show_amd_pstate_prefcore_ranking(struct cpufreq_policy *policy, @@ -1431,6 +1449,7 @@ static bool amd_pstate_acpi_pm_profile_undefined(void) static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata; + union perf_cached perf; struct device *dev; u64 value; int ret; @@ -1464,8 +1483,15 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) if (ret) goto free_cpudata1; - policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf); - policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf); + perf = READ_ONCE(cpudata->perf); + + policy->cpuinfo.min_freq = policy->min = perf_to_freq(perf, + cpudata->nominal_freq, + perf.lowest_perf); + policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf, + cpudata->nominal_freq, + perf.highest_perf); + /* It will be updated by governor */ policy->cur = policy->cpuinfo.min_freq; @@ -1526,6 +1552,7 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy) static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata = policy->driver_data; + union perf_cached perf; u8 epp; if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq) @@ -1536,15 +1563,16 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) else epp = READ_ONCE(cpudata->epp_cached); + perf = READ_ONCE(cpudata->perf); if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, epp, - cpudata->min_limit_perf, - cpudata->max_limit_perf, + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp, + perf.min_limit_perf, + perf.max_limit_perf, policy->boost_enabled); } - return amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U, - cpudata->max_limit_perf, epp, false); + return amd_pstate_update_perf(cpudata, perf.min_limit_perf, 0U, + perf.max_limit_perf, epp, false); } static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) @@ -1576,20 +1604,18 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) static int amd_pstate_epp_reenable(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata = policy->driver_data; - u8 max_perf; + union perf_cached perf = READ_ONCE(cpudata->perf); int ret; ret = amd_pstate_cppc_enable(true); if (ret) pr_err("failed to enable amd pstate during resume, return %d\n", ret); - max_perf = READ_ONCE(cpudata->highest_perf); - if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, cpudata->epp_cached, FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), - max_perf, policy->boost_enabled); + perf.highest_perf, policy->boost_enabled); } return amd_pstate_epp_update_limit(policy); @@ -1613,22 +1639,21 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy) static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata = policy->driver_data; - u8 min_perf; + union perf_cached perf = READ_ONCE(cpudata->perf); if (cpudata->suspended) return 0; - min_perf = READ_ONCE(cpudata->lowest_perf); - guard(mutex)(&amd_pstate_limits_lock); if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, + trace_amd_pstate_epp_perf(cpudata->cpu, 
perf.highest_perf, AMD_CPPC_EPP_BALANCE_POWERSAVE, - min_perf, min_perf, policy->boost_enabled); + perf.lowest_perf, perf.lowest_perf, + policy->boost_enabled); } - return amd_pstate_update_perf(cpudata, min_perf, 0, min_perf, + return amd_pstate_update_perf(cpudata, perf.lowest_perf, 0, perf.lowest_perf, AMD_CPPC_EPP_BALANCE_POWERSAVE, false); } diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h index 0149933692458..83532a0079a81 100644 --- a/drivers/cpufreq/amd-pstate.h +++ b/drivers/cpufreq/amd-pstate.h @@ -13,6 +13,36 @@ /********************************************************************* * AMD P-state INTERFACE * *********************************************************************/ + +/** + * union perf_cached - A union to cache performance-related data. + * @highest_perf: the maximum performance an individual processor may reach, + * assuming ideal conditions + * For platforms that support the preferred core feature, the highest_perf value maybe + * configured to any value in the range 166-255 by the firmware (because the preferred + * core ranking is encoded in the highest_perf value). To maintain consistency across + * all platforms, we split the highest_perf and preferred core ranking values into + * cpudata->perf.highest_perf and cpudata->prefcore_ranking. + * @nominal_perf: the maximum sustained performance level of the processor, + * assuming ideal operating conditions + * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power + * savings are achieved + * @lowest_perf: the absolute lowest performance level of the processor + * @min_limit_perf: Cached value of the performance corresponding to policy->min + * @max_limit_perf: Cached value of the performance corresponding to policy->max + */ +union perf_cached { + struct { + u8 highest_perf; + u8 nominal_perf; + u8 lowest_nonlinear_perf; + u8 lowest_perf; + u8 min_limit_perf; + u8 max_limit_perf; + }; + u64 val; +}; + /** * struct amd_aperf_mperf * @aperf: actual performance frequency clock count @@ -30,20 +60,9 @@ struct amd_aperf_mperf { * @cpu: CPU number * @req: constraint request to apply * @cppc_req_cached: cached performance request hints - * @highest_perf: the maximum performance an individual processor may reach, - * assuming ideal conditions - * For platforms that do not support the preferred core feature, the - * highest_pef may be configured with 166 or 255, to avoid max frequency - * calculated wrongly. we take the fixed value as the highest_perf. - * @nominal_perf: the maximum sustained performance level of the processor, - * assuming ideal operating conditions - * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power - * savings are achieved - * @lowest_perf: the absolute lowest performance level of the processor + * @perf: cached performance-related data * @prefcore_ranking: the preferred core ranking, the higher value indicates a higher * priority. 
- * @min_limit_perf: Cached value of the performance corresponding to policy->min - * @max_limit_perf: Cached value of the performance corresponding to policy->max * @min_limit_freq: Cached value of policy->min (in khz) * @max_limit_freq: Cached value of policy->max (in khz) * @nominal_freq: the frequency (in khz) that mapped to nominal_perf @@ -68,13 +87,9 @@ struct amd_cpudata { struct freq_qos_request req[2]; u64 cppc_req_cached; - u8 highest_perf; - u8 nominal_perf; - u8 lowest_nonlinear_perf; - u8 lowest_perf; + union perf_cached perf; + u8 prefcore_ranking; - u8 min_limit_perf; - u8 max_limit_perf; u32 min_limit_freq; u32 max_limit_freq; u32 nominal_freq; From patchwork Wed Feb 26 07:49:20 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991654 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 57877270EDD; Wed, 26 Feb 2025 07:49:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556189; cv=none; b=KbA0+2eSigVEUcRjW//a9TgSPVuZymj8hpPzXXV4QInRbeD/z/JWeqXG3IrhsXcjswqY0Bn50s7EuxCArhUog0/6a58JczJfWhDmrpGICq9mKDBWS/REc7W+QIGcA1heV7spJC0APnAsIV/eA7JU1KTW1c0HE2/+q8PrhFsVz9w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556189; c=relaxed/simple; bh=RhhlMY/+/0QCcNeMO/gDSOU04nQVsd/ZzzYkWaGrJqI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=fmrp+udgx/4Tr35W1o4l0QM/6J0QriAs4MOEANJ+17UrouMeIwbM78UIWlKDWZTa6fWUg1zDsTy3clcA5cog8dKnUhCaNAerihflCXyRuSKB4FIfw8ueCjafYhLEB7MxwqN8fiy4kq4387TYRL1IhITZbimdcTPP8A9/Kewh34s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=QYEwJYWZ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="QYEwJYWZ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BF392C4CEE7; Wed, 26 Feb 2025 07:49:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556188; bh=RhhlMY/+/0QCcNeMO/gDSOU04nQVsd/ZzzYkWaGrJqI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QYEwJYWZHgp5B75oS7SQ3F5hLC6rYkWE9MFS1qvx1qxxqU+ANkW1Hd4s2nNgxY7ME 0baEBJmV+YAshlX/ecAruKnnyyDePOtM5g+2Me3Lpe+pxkmQiNmQs0SlcneHD0gUiT dtT6JbeDanB+eZ7oBpNucxjYXGr8rDF/fw1gWi2UkszkIRqC70hZrihGtdU0HjHw5P kQMIS0LJUda5Z2ApZOg+yr+uhkt7bJiM0Bz3Vz0PWrPYSXnQTq6LJoxECbMbLwERMR ESvs/hKi8rbujQoeC3ap8UKCsfr+6pz4QHnEtNQf719oD3pE1DU3scrJHJxNJTwNpJ 2p/8R6td/WCUg== From: Mario Limonciello To: "Gautham R . 
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 05/19] cpufreq/amd-pstate: Overhaul locking Date: Wed, 26 Feb 2025 01:49:20 -0600 Message-ID: <20250226074934.1667721-6-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello amd_pstate_cpu_boost_update() and refresh_frequency_limits() both update the policy state and have nothing to do with the amd-pstate driver itself. A global "limits" lock doesn't make sense because each CPU can have policies changed independently. Each time a CPU changes values they will atomically be written to the per-CPU perf member. Drop per CPU locking cases. The remaining "global" driver lock is used to ensure that only one entity can change driver modes at a given time. Reviewed-by: Gautham R. Shenoy Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- v5: * Add tag --- drivers/cpufreq/amd-pstate.c | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index bd8bcda4e6eb0..95b77cf145174 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -196,7 +196,6 @@ static inline int get_mode_idx_from_str(const char *str, size_t size) return -EINVAL; } -static DEFINE_MUTEX(amd_pstate_limits_lock); static DEFINE_MUTEX(amd_pstate_driver_lock); static u8 msr_get_epp(struct amd_cpudata *cpudata) @@ -752,7 +751,6 @@ static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state) pr_err("Boost mode is not supported by this processor or SBIOS\n"); return -EOPNOTSUPP; } - guard(mutex)(&amd_pstate_driver_lock); ret = amd_pstate_cpu_boost_update(policy, state); refresh_frequency_limits(policy); @@ -1176,8 +1174,6 @@ static ssize_t store_energy_performance_preference( if (ret < 0) return -EINVAL; - guard(mutex)(&amd_pstate_limits_lock); - ret = amd_pstate_set_energy_pref_index(policy, ret); return ret ? ret : count; @@ -1350,8 +1346,10 @@ int amd_pstate_update_status(const char *buf, size_t size) if (mode_idx < 0 || mode_idx >= AMD_PSTATE_MAX) return -EINVAL; - if (mode_state_machine[cppc_state][mode_idx]) + if (mode_state_machine[cppc_state][mode_idx]) { + guard(mutex)(&amd_pstate_driver_lock); return mode_state_machine[cppc_state][mode_idx](mode_idx); + } return 0; } @@ -1372,7 +1370,6 @@ static ssize_t status_store(struct device *a, struct device_attribute *b, char *p = memchr(buf, '\n', count); int ret; - guard(mutex)(&amd_pstate_driver_lock); ret = amd_pstate_update_status(buf, p ? p - buf : count); return ret < 0 ? 
ret : count; @@ -1644,8 +1641,6 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) if (cpudata->suspended) return 0; - guard(mutex)(&amd_pstate_limits_lock); - if (trace_amd_pstate_epp_perf_enabled()) { trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, AMD_CPPC_EPP_BALANCE_POWERSAVE, @@ -1685,8 +1680,6 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy) struct amd_cpudata *cpudata = policy->driver_data; if (cpudata->suspended) { - guard(mutex)(&amd_pstate_limits_lock); - /* enable amd pstate from suspend state*/ amd_pstate_epp_reenable(policy); From patchwork Wed Feb 26 07:49:21 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991655 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 59C2527127A; Wed, 26 Feb 2025 07:49:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556190; cv=none; b=PxR6owdp1EI0Rd/zQaFCsu23IvQbS3mgSQRgbb0uN3cc1VLl0tw8MlGH53cgzrn4ol2hrZtqw22BF0hrK+IsHvozOed98qfG9Kmap3HJYgk9LCH8pI5fgTL+3dNdb8SWo0guM2QI8hsxbb6YXMCtbpWBFZNNG1byiuPEAGj418Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556190; c=relaxed/simple; bh=7948qPgNwS4plElTkkI+DCDSLW+DMO0jgom1BwBpJkk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cnlssKbVDGw8iSTnQ0+GHfotJ0C8uspaUBxK9u5dHnmPSe0Vjh/bIFCrcfyuQIb2HaZnX/lU3rYzhqXS4yI5BFGVsMYgSwYNCMuFHxSom+iAbNEkBZLOQfspMk96IsMFUgE2tTNv+pz52lqWtJFknUcmWzVasaSCi7In1Ouzmdg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ganH3Gkt; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ganH3Gkt" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2F6C4C4CEE2; Wed, 26 Feb 2025 07:49:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556190; bh=7948qPgNwS4plElTkkI+DCDSLW+DMO0jgom1BwBpJkk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ganH3GktM0umsFArds7uYH2IxjMIU4IDUKaDMBTWJNs52jdcJfc/V8dVrQgM+K17s ypjir1BAC2V5LVIG+FxV+yu+LEaWJXac0kuh8Y9kTo5lEfWTaFPRAtpS7fslNnYh3Y BslArcAEbT9AO1ffmvW7CSzJORc2MTlu0ejdwXe846HvGRTcb4Tr+EGwx6kUqQmjCl NsVMPgGaRCw2eNz3Z0BLPT+m/BM7XdSoED/DSuZMuztjfqgqQR8JVS4028qry7MAll objfGK6/DroVerSynpa+ozG8skqKCVGTvwgmXxycJfpfKUtvV1p38nHVVBInpJECW+ CrecqetJDDEzA== From: Mario Limonciello To: "Gautham R . 
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 06/19] cpufreq/amd-pstate: Drop `cppc_cap1_cached` Date: Wed, 26 Feb 2025 01:49:21 -0600 Message-ID: <20250226074934.1667721-7-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello The `cppc_cap1_cached` variable isn't used at all, there is no need to read it at initialization for each CPU. Reviewed-by: Gautham R. Shenoy Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- drivers/cpufreq/amd-pstate.c | 5 ----- drivers/cpufreq/amd-pstate.h | 2 -- 2 files changed, 7 deletions(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 95b77cf145174..2cec8d7d92f51 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -1514,11 +1514,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) if (ret) return ret; WRITE_ONCE(cpudata->cppc_req_cached, value); - - ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &value); - if (ret) - return ret; - WRITE_ONCE(cpudata->cppc_cap1_cached, value); } ret = amd_pstate_set_epp(cpudata, cpudata->epp_default); if (ret) diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h index 83532a0079a81..1557e1afea79c 100644 --- a/drivers/cpufreq/amd-pstate.h +++ b/drivers/cpufreq/amd-pstate.h @@ -76,7 +76,6 @@ struct amd_aperf_mperf { * AMD P-State driver supports preferred core featue. * @epp_cached: Cached CPPC energy-performance preference value * @policy: Cpufreq policy value - * @cppc_cap1_cached Cached MSR_AMD_CPPC_CAP1 register value * * The amd_cpudata is key private data for each CPU thread in AMD P-State, and * represents all the attributes and goals that AMD P-State requests at runtime. 
@@ -105,7 +104,6 @@ struct amd_cpudata { /* EPP feature related attributes*/ u8 epp_cached; u32 policy; - u64 cppc_cap1_cached; bool suspended; u8 epp_default; }; From patchwork Wed Feb 26 07:49:22 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991656 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BE5C227180F; Wed, 26 Feb 2025 07:49:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556191; cv=none; b=B3eoLItJYeNCCvk4TkUPbVmvI9l/L8jjfmdm6+eDTuRBTxezMalFSW+rSbHuVM5gea5ww2deNDGM+RKY+jQLj3oisQMm5FIlvKrgSZryZjdCY8trLEFVNKC6w8YnKzNWS7ZcSO7FT2zUxfS111nTsE712HxgtvgcAzmv7xFp0wo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556191; c=relaxed/simple; bh=OQYNxtGz0wJeCEblJ/slox/YGV07cQK/uO8CO726WiU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ZF0gc3ogecUxRyloBL9aB/HbvSI5T7KVwd8/p5XKDVlrurhp5EDLrP/DevFgOcr1dzbNq8r/GggoEhZy2wc/IeMgXFvk7j6sJBsHqWLEo133Q1MJIl43UxuQdkEFkaVO0r4kfLQkZElsl8ExkdW2XM3PLrrqeV+HaTDxgNUQkAI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=O/cyR0CJ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="O/cyR0CJ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8EE66C4CED6; Wed, 26 Feb 2025 07:49:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556191; bh=OQYNxtGz0wJeCEblJ/slox/YGV07cQK/uO8CO726WiU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=O/cyR0CJo/zwPWFWzvPgzIBhCtS7nsvhbY4l5WrilMujqXg8DWNgQTmqIvRM6mda5 dSOB+zOHgf/E8VIteRYPHk8WyrHAhOadHRNr26rGPOcei0xCKbQQb7CniCp2ZFU4rI n/KP8ax/vvD+/uy41hGH+Eob0sIGSDlizhd36guzmAKndRL6L94qZq8WNv7EBv0Fht 7/jEobIU/kbf1ZBHsodXI6GyNoV95tyxMUHTj9mCserAu7SkPg/wOhcMVnim17anNO i4bTwcV9ofWFxyDcfw2CMgG7KdnVHz/lz45IXC2XhW3rqD2G7NVJhn+PK3ajf84749 i6lGhdYe/bI3Q== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 07/19] cpufreq/amd-pstate-ut: Use _free macro to free put policy Date: Wed, 26 Feb 2025 01:49:22 -0600 Message-ID: <20250226074934.1667721-8-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello Using a scoped cleanup macro simplifies cleanup code. Reviewed-by: Dhananjay Ugwekar Reviewed-by: Gautham R. 
Shenoy Signed-off-by: Mario Limonciello --- drivers/cpufreq/amd-pstate-ut.c | 33 ++++++++++++++------------------- 1 file changed, 14 insertions(+), 19 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index 5f6a92a816e61..e02672e67380a 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -26,6 +26,7 @@ #include #include #include +#include #include @@ -127,11 +128,12 @@ static void amd_pstate_ut_check_perf(u32 index) u32 highest_perf = 0, nominal_perf = 0, lowest_nonlinear_perf = 0, lowest_perf = 0; u64 cap1 = 0; struct cppc_perf_caps cppc_perf; - struct cpufreq_policy *policy = NULL; struct amd_cpudata *cpudata = NULL; union perf_cached cur_perf; for_each_possible_cpu(cpu) { + struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; + policy = cpufreq_cpu_get(cpu); if (!policy) break; @@ -142,7 +144,7 @@ static void amd_pstate_ut_check_perf(u32 index) if (ret) { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cppc_get_perf_caps ret=%d error!\n", __func__, ret); - goto skip_test; + return; } highest_perf = cppc_perf.highest_perf; @@ -154,7 +156,7 @@ static void amd_pstate_ut_check_perf(u32 index) if (ret) { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret); - goto skip_test; + return; } highest_perf = AMD_CPPC_HIGHEST_PERF(cap1); @@ -167,7 +169,7 @@ static void amd_pstate_ut_check_perf(u32 index) if (highest_perf != cur_perf.highest_perf && !cpudata->hw_prefcore) { pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n", __func__, cpu, highest_perf, cur_perf.highest_perf); - goto skip_test; + return; } if (nominal_perf != cur_perf.nominal_perf || (lowest_nonlinear_perf != cur_perf.lowest_nonlinear_perf) || @@ -177,7 +179,7 @@ static void amd_pstate_ut_check_perf(u32 index) __func__, cpu, nominal_perf, cur_perf.nominal_perf, lowest_nonlinear_perf, cur_perf.lowest_nonlinear_perf, lowest_perf, cur_perf.lowest_perf); - goto skip_test; + return; } if (!((highest_perf >= nominal_perf) && @@ -188,15 +190,11 @@ static void amd_pstate_ut_check_perf(u32 index) pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n", __func__, cpu, highest_perf, nominal_perf, lowest_nonlinear_perf, lowest_perf); - goto skip_test; + return; } - cpufreq_cpu_put(policy); } amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; - return; -skip_test: - cpufreq_cpu_put(policy); } /* @@ -207,10 +205,11 @@ static void amd_pstate_ut_check_perf(u32 index) static void amd_pstate_ut_check_freq(u32 index) { int cpu = 0; - struct cpufreq_policy *policy = NULL; struct amd_cpudata *cpudata = NULL; for_each_possible_cpu(cpu) { + struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; + policy = cpufreq_cpu_get(cpu); if (!policy) break; @@ -224,14 +223,14 @@ static void amd_pstate_ut_check_freq(u32 index) pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n", __func__, cpu, policy->cpuinfo.max_freq, cpudata->nominal_freq, cpudata->lowest_nonlinear_freq, policy->cpuinfo.min_freq); - goto skip_test; + return; } if (cpudata->lowest_nonlinear_freq != policy->min) { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d cpudata_lowest_nonlinear_freq=%d policy_min=%d, they should be equal!\n", __func__, cpu, cpudata->lowest_nonlinear_freq, policy->min); - goto skip_test; + return; } if (cpudata->boost_supported) { 
@@ -243,20 +242,16 @@ static void amd_pstate_ut_check_freq(u32 index) pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n", __func__, cpu, policy->max, policy->cpuinfo.max_freq, cpudata->nominal_freq); - goto skip_test; + return; } } else { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d must support boost!\n", __func__, cpu); - goto skip_test; + return; } - cpufreq_cpu_put(policy); } amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; - return; -skip_test: - cpufreq_cpu_put(policy); } static int amd_pstate_set_mode(enum amd_pstate_mode mode) From patchwork Wed Feb 26 07:49:23 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991657 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3200A271839; Wed, 26 Feb 2025 07:49:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556193; cv=none; b=hSed3ZGPCmNS4L+LyITqiYUFD8rXhY+cTnuBeEtgUH/LV4UJk5r1vkgcYifhce1d0O0edK3u6tSGcKvvY30YwwqKrlkavh683dxO5I2KGIDKJ54GuIWiC79QRZkIAZIqFf0XqRlAEo4SjVumXOTZuAFAzQl/IW1qpqb24/z34CY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556193; c=relaxed/simple; bh=4fgPtjxQ+iiiTGlFmFYmmvw8hB/zd6dMGJt9QzGFgSw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=GmrSeE0l89o7oIB7jbiaJFvUipk1BwMO7mt9yZZwEQpEZOxivrREa99Z25dIzCZElCvb7m8Vpa6fIzUd0AVUVun6yl/QQ4gfCL5OhZnrKI6PoJBlZw1TIp6BLj5L3rLUgnpyQ/h0333IhKMlyDadIwD157ld/CK2gjzNyURfXhI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=WjVfO5jX; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="WjVfO5jX" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 00491C4CEE9; Wed, 26 Feb 2025 07:49:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556193; bh=4fgPtjxQ+iiiTGlFmFYmmvw8hB/zd6dMGJt9QzGFgSw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WjVfO5jX7NcDN2Dm4WRjof428bwHTpK8Z5JDP1gf/Vm2RyfYssLjHUYO4nOuKCgSb CZtHWzp7LOOI7u8nhslwnSsfQzq45nPubADye5iRAfESke/5L6+O1ZDS9mlHvw/Aok 5T5RZ4kr+pEDZ3kjPR/bWCvKtKxs3fLKowKwPLf0xN4xifb70fPvxfTEH86bNX7Zf+ EPseZkVFdiyi86/gxz3qUWX2oWDhqJAYlWaBJEv0DWEQK8ygoMooERyrS24I2+Pwp3 wGyeAfuOc8SNa2lJtszbGlTV8GLsoHVlFkCiMc7HTgoqRQuTWbpjPVxc/nfOZUev9C oZ0w3sXaaIbng== From: Mario Limonciello To: "Gautham R . 
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 08/19] cpufreq/amd-pstate-ut: Allow lowest nonlinear and lowest to be the same Date: Wed, 26 Feb 2025 01:49:23 -0600 Message-ID: <20250226074934.1667721-9-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello Several Ryzen AI processors support the exact same value for lowest nonlinear perf and lowest perf. Loosen up the unit tests to allow this scenario. Reviewed-by: Gautham R. Shenoy Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- v5: * Add tag --- drivers/cpufreq/amd-pstate-ut.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index e02672e67380a..b3693a4c28d26 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -184,7 +184,7 @@ static void amd_pstate_ut_check_perf(u32 index) if (!((highest_perf >= nominal_perf) && (nominal_perf > lowest_nonlinear_perf) && - (lowest_nonlinear_perf > lowest_perf) && + (lowest_nonlinear_perf >= lowest_perf) && (lowest_perf > 0))) { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n", @@ -217,7 +217,7 @@ static void amd_pstate_ut_check_freq(u32 index) if (!((policy->cpuinfo.max_freq >= cpudata->nominal_freq) && (cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) && - (cpudata->lowest_nonlinear_freq > policy->cpuinfo.min_freq) && + (cpudata->lowest_nonlinear_freq >= policy->cpuinfo.min_freq) && (policy->cpuinfo.min_freq > 0))) { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n", From patchwork Wed Feb 26 07:49:24 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991658 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 92937272924; Wed, 26 Feb 2025 07:49:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556194; cv=none; b=DyyJQ3HZXo0RHM5V6b1qNTkNx3nFC8kwFNTpSTT+aIrokgwFXZ668T2dw2KEW+Cz20fTwg0Jk9xsZUbMLp4mdqf+189cFDdY/q9XeqA67/aIrIBu3K1/OpkELcFQa6MdNl8uYXrZ3RzG+b2i/6r/ejFv8tQ96C9vtZx/hHQ7aNY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556194; c=relaxed/simple; bh=Q9vnlr16LPJsaaaQcIaxKEsHF/dgnRxRXNH3cGTXpUM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uvvr48z1/+/4vw9KSUdDmhrxDjs/IKNKezAF0JKKCR6RpyuGWgcvljdoMX4yNIhRtyav7WKCAcDwJCHuapelCt5o+Twt/fSgisjvVUNLQSmXOyMlQwmlqbeCpeodkwYVF1skgTU/uUQmkETSGF4lNGZ4QUIz6YtNbgwtwUB0biI= 
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=kaVL8ONB; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="kaVL8ONB" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 69ACAC4CED6; Wed, 26 Feb 2025 07:49:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556194; bh=Q9vnlr16LPJsaaaQcIaxKEsHF/dgnRxRXNH3cGTXpUM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kaVL8ONBSqDWRKKmOvqVYV3uVMrxgcbez4F2W+fPtF0v9bOa9nf/gDrRSkcJHzWgA LaYfyK3BOoIUnGL6hbsvzLeTm9C4sBF+hEol0GG87ItYk2UnSLjexwb3HtplBr5RYB fSgV1U9vAKMCQdiG5JdWlrT5pkhJT8MUKVHEtLRCTC+fbX+0SbcShLbR7xR84GXXvx KJdyVGpj3xgmMxS0s1iEXQh+8XvwWwryFkndVqpOFRvO1/y1Z2Ng/051ui73q2loBA 5Mki//Srdt9ExDppE9k8HD0NZ6aoRkSvHz+knK6tErH5OVJ/sJNqz0qbs2+bgazbUr Z9jbw4bp5NVmw== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 09/19] cpufreq/amd-pstate-ut: Drop SUCCESS and FAIL enums Date: Wed, 26 Feb 2025 01:49:24 -0600 Message-ID: <20250226074934.1667721-10-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello Enums are effectively used as a boolean and don't show the return value of the failing call. Instead of using enums switch to returning the actual return code from the unit test. Reviewed-by: Gautham R. Shenoy Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- v5: * Add tag --- drivers/cpufreq/amd-pstate-ut.c | 143 ++++++++++++-------------------- 1 file changed, 55 insertions(+), 88 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index b3693a4c28d26..cd9a472e8dc3c 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -32,30 +32,20 @@ #include "amd-pstate.h" -/* - * Abbreviations: - * amd_pstate_ut: used as a shortform for AMD P-State unit test. - * It helps to keep variable names smaller, simpler - */ -enum amd_pstate_ut_result { - AMD_PSTATE_UT_RESULT_PASS, - AMD_PSTATE_UT_RESULT_FAIL, -}; struct amd_pstate_ut_struct { const char *name; - void (*func)(u32 index); - enum amd_pstate_ut_result result; + int (*func)(u32 index); }; /* * Kernel module for testing the AMD P-State unit test */ -static void amd_pstate_ut_acpi_cpc_valid(u32 index); -static void amd_pstate_ut_check_enabled(u32 index); -static void amd_pstate_ut_check_perf(u32 index); -static void amd_pstate_ut_check_freq(u32 index); -static void amd_pstate_ut_check_driver(u32 index); +static int amd_pstate_ut_acpi_cpc_valid(u32 index); +static int amd_pstate_ut_check_enabled(u32 index); +static int amd_pstate_ut_check_perf(u32 index); +static int amd_pstate_ut_check_freq(u32 index); +static int amd_pstate_ut_check_driver(u32 index); static struct amd_pstate_ut_struct amd_pstate_ut_cases[] = { {"amd_pstate_ut_acpi_cpc_valid", amd_pstate_ut_acpi_cpc_valid }, @@ -78,51 +68,46 @@ static bool get_shared_mem(void) /* * check the _CPC object is present in SBIOS. 
*/ -static void amd_pstate_ut_acpi_cpc_valid(u32 index) +static int amd_pstate_ut_acpi_cpc_valid(u32 index) { - if (acpi_cpc_valid()) - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; - else { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; + if (!acpi_cpc_valid()) { pr_err("%s the _CPC object is not present in SBIOS!\n", __func__); + return -EINVAL; } + + return 0; } -static void amd_pstate_ut_pstate_enable(u32 index) +/* + * check if amd pstate is enabled + */ +static int amd_pstate_ut_check_enabled(u32 index) { - int ret = 0; u64 cppc_enable = 0; + int ret; + + if (get_shared_mem()) + return 0; ret = rdmsrl_safe(MSR_AMD_CPPC_ENABLE, &cppc_enable); if (ret) { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s rdmsrl_safe MSR_AMD_CPPC_ENABLE ret=%d error!\n", __func__, ret); - return; + return ret; } - if (cppc_enable) - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; - else { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; + + if (!cppc_enable) { pr_err("%s amd pstate must be enabled!\n", __func__); + return -EINVAL; } -} -/* - * check if amd pstate is enabled - */ -static void amd_pstate_ut_check_enabled(u32 index) -{ - if (get_shared_mem()) - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; - else - amd_pstate_ut_pstate_enable(index); + return 0; } /* * check if performance values are reasonable. * highest_perf >= nominal_perf > lowest_nonlinear_perf > lowest_perf > 0 */ -static void amd_pstate_ut_check_perf(u32 index) +static int amd_pstate_ut_check_perf(u32 index) { int cpu = 0, ret = 0; u32 highest_perf = 0, nominal_perf = 0, lowest_nonlinear_perf = 0, lowest_perf = 0; @@ -142,9 +127,8 @@ static void amd_pstate_ut_check_perf(u32 index) if (get_shared_mem()) { ret = cppc_get_perf_caps(cpu, &cppc_perf); if (ret) { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cppc_get_perf_caps ret=%d error!\n", __func__, ret); - return; + return ret; } highest_perf = cppc_perf.highest_perf; @@ -154,9 +138,8 @@ static void amd_pstate_ut_check_perf(u32 index) } else { ret = rdmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1); if (ret) { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret); - return; + return ret; } highest_perf = AMD_CPPC_HIGHEST_PERF(cap1); @@ -169,32 +152,30 @@ static void amd_pstate_ut_check_perf(u32 index) if (highest_perf != cur_perf.highest_perf && !cpudata->hw_prefcore) { pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n", __func__, cpu, highest_perf, cur_perf.highest_perf); - return; + return -EINVAL; } if (nominal_perf != cur_perf.nominal_perf || (lowest_nonlinear_perf != cur_perf.lowest_nonlinear_perf) || (lowest_perf != cur_perf.lowest_perf)) { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d nominal=%d %d lowest_nonlinear=%d %d lowest=%d %d, they should be equal!\n", __func__, cpu, nominal_perf, cur_perf.nominal_perf, lowest_nonlinear_perf, cur_perf.lowest_nonlinear_perf, lowest_perf, cur_perf.lowest_perf); - return; + return -EINVAL; } if (!((highest_perf >= nominal_perf) && (nominal_perf > lowest_nonlinear_perf) && (lowest_nonlinear_perf >= lowest_perf) && (lowest_perf > 0))) { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n", __func__, cpu, highest_perf, nominal_perf, lowest_nonlinear_perf, lowest_perf); - 
return; + return -EINVAL; } } - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; + return 0; } /* @@ -202,7 +183,7 @@ static void amd_pstate_ut_check_perf(u32 index) * max_freq >= nominal_freq > lowest_nonlinear_freq > min_freq > 0 * check max freq when set support boost mode. */ -static void amd_pstate_ut_check_freq(u32 index) +static int amd_pstate_ut_check_freq(u32 index) { int cpu = 0; struct amd_cpudata *cpudata = NULL; @@ -219,39 +200,33 @@ static void amd_pstate_ut_check_freq(u32 index) (cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) && (cpudata->lowest_nonlinear_freq >= policy->cpuinfo.min_freq) && (policy->cpuinfo.min_freq > 0))) { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n", __func__, cpu, policy->cpuinfo.max_freq, cpudata->nominal_freq, cpudata->lowest_nonlinear_freq, policy->cpuinfo.min_freq); - return; + return -EINVAL; } if (cpudata->lowest_nonlinear_freq != policy->min) { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d cpudata_lowest_nonlinear_freq=%d policy_min=%d, they should be equal!\n", __func__, cpu, cpudata->lowest_nonlinear_freq, policy->min); - return; + return -EINVAL; } if (cpudata->boost_supported) { - if ((policy->max == policy->cpuinfo.max_freq) || - (policy->max == cpudata->nominal_freq)) - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; - else { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; + if ((policy->max != policy->cpuinfo.max_freq) && + (policy->max != cpudata->nominal_freq)) { pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n", __func__, cpu, policy->max, policy->cpuinfo.max_freq, cpudata->nominal_freq); - return; + return -EINVAL; } } else { - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d must support boost!\n", __func__, cpu); - return; + return -EINVAL; } } - amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; + return 0; } static int amd_pstate_set_mode(enum amd_pstate_mode mode) @@ -263,32 +238,28 @@ static int amd_pstate_set_mode(enum amd_pstate_mode mode) return amd_pstate_update_status(mode_str, strlen(mode_str)); } -static void amd_pstate_ut_check_driver(u32 index) +static int amd_pstate_ut_check_driver(u32 index) { enum amd_pstate_mode mode1, mode2 = AMD_PSTATE_DISABLE; - int ret; for (mode1 = AMD_PSTATE_DISABLE; mode1 < AMD_PSTATE_MAX; mode1++) { - ret = amd_pstate_set_mode(mode1); + int ret = amd_pstate_set_mode(mode1); if (ret) - goto out; + return ret; for (mode2 = AMD_PSTATE_DISABLE; mode2 < AMD_PSTATE_MAX; mode2++) { if (mode1 == mode2) continue; ret = amd_pstate_set_mode(mode2); - if (ret) - goto out; + if (ret) { + pr_err("%s: failed to update status for %s->%s\n", __func__, + amd_pstate_get_mode_string(mode1), + amd_pstate_get_mode_string(mode2)); + return ret; + } } } -out: - if (ret) - pr_warn("%s: failed to update status for %s->%s: %d\n", __func__, - amd_pstate_get_mode_string(mode1), - amd_pstate_get_mode_string(mode2), ret); - - amd_pstate_ut_cases[index].result = ret ? 
- AMD_PSTATE_UT_RESULT_FAIL : - AMD_PSTATE_UT_RESULT_PASS; + + return 0; } static int __init amd_pstate_ut_init(void) @@ -296,16 +267,12 @@ static int __init amd_pstate_ut_init(void) u32 i = 0, arr_size = ARRAY_SIZE(amd_pstate_ut_cases); for (i = 0; i < arr_size; i++) { - amd_pstate_ut_cases[i].func(i); - switch (amd_pstate_ut_cases[i].result) { - case AMD_PSTATE_UT_RESULT_PASS: + int ret = amd_pstate_ut_cases[i].func(i); + + if (ret) + pr_err("%-4d %-20s\t fail: %d!\n", i+1, amd_pstate_ut_cases[i].name, ret); + else pr_info("%-4d %-20s\t success!\n", i+1, amd_pstate_ut_cases[i].name); - break; - case AMD_PSTATE_UT_RESULT_FAIL: - default: - pr_info("%-4d %-20s\t fail!\n", i+1, amd_pstate_ut_cases[i].name); - break; - } } return 0; From patchwork Wed Feb 26 07:49:25 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991659 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6D7D027424D; Wed, 26 Feb 2025 07:49:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556196; cv=none; b=euEPE5KS/QAYyDYEAmjcYppTxOrKa4yx5uGIvBoTNSahbt3XQ0Jvuub/IZKamI/xMibKu1bqHd2GfVbO1iA86tmIaB481Rl5hyyv+gL7hnsjE/PiaS/GzUzYEQPjqr9ZCK/zFcp80N305AYxAKNDib3kmoW8lionqMtTYGzQi34= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556196; c=relaxed/simple; bh=KEYOxg0MUUgNzwwYusMNxICKVX8fUtEKwCLiqVCxxxM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=agxvLzmQlEdjwzYqB9gPC4ur5AFfYhwHNpm9Jd+DdJdpUh49JXiPDik4Hc22z/8mBTgrGCuHTp8VUdEaKqY/1s19zNkIbFqSI+bcXQgOF1DKxNyxX/Or5amP9HYjb9bmh7lZGvYjdfJO3lTKLsgGekchdqxXVqISVA63ZYgoEG4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=mUVgzX42; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="mUVgzX42" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C6905C4CEED; Wed, 26 Feb 2025 07:49:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556195; bh=KEYOxg0MUUgNzwwYusMNxICKVX8fUtEKwCLiqVCxxxM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mUVgzX42/yZVrJrK5jUjseNUDbFm05QnN9TG5dEkzWFNOcKEvXviQaTCRrP9ptQXR 77fbBTsDyCJ8U25/yt1uM6iHQgvFmoi9Oefc91e+3gn89wODXvDLU5n3nyXc5vvqKP ZzKbfQ7K8aAsSglWjBMt6kkan8SIdIIoBRhID6W+MGdgpBo4GA/FTO57r7W75WRb0t 3SKJPgCtWWb3bM/61X7jhfOGmpw0rrshdL30p1iISGk17iA0BuBslXjXzylKABF3Mr 92G2aHJRqJR/UEllzJu9JY6z5deIVY6QJgsHTmBO8xW8Aaxt5Icz8p8w6sxQ6nibx3 AGITId6sFdjpg== From: Mario Limonciello To: "Gautham R . 
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 10/19] cpufreq/amd-pstate-ut: Run on all of the correct CPUs Date: Wed, 26 Feb 2025 01:49:25 -0600 Message-ID: <20250226074934.1667721-11-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello If a CPU is missing a policy or one has been offlined then the unit test is skipped for the rest of the CPUs on the system. Instead; iterate online CPUs and skip any missing policies to allow continuing to test the rest of them. Reviewed-by: Gautham R. Shenoy Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- v5: * Add tag v3: * Only check online CPUs * Update commit message --- drivers/cpufreq/amd-pstate-ut.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index cd9a472e8dc3c..2ab3017d7a0bb 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -116,12 +116,12 @@ static int amd_pstate_ut_check_perf(u32 index) struct amd_cpudata *cpudata = NULL; union perf_cached cur_perf; - for_each_possible_cpu(cpu) { + for_each_online_cpu(cpu) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; policy = cpufreq_cpu_get(cpu); if (!policy) - break; + continue; cpudata = policy->driver_data; if (get_shared_mem()) { @@ -188,12 +188,12 @@ static int amd_pstate_ut_check_freq(u32 index) int cpu = 0; struct amd_cpudata *cpudata = NULL; - for_each_possible_cpu(cpu) { + for_each_online_cpu(cpu) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; policy = cpufreq_cpu_get(cpu); if (!policy) - break; + continue; cpudata = policy->driver_data; if (!((policy->cpuinfo.max_freq >= cpudata->nominal_freq) && From patchwork Wed Feb 26 07:49:26 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991660 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 636CE27426A; Wed, 26 Feb 2025 07:49:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556197; cv=none; b=rL9CGME85apqfLsgGXXxrVS5+x0FvK3eggkGURcXZue6IPzJ4boi+DOm7DONc014QQTwxXm5JKZm8/Bv+iKsFOC7Vi0b0eRH/JdZGbrXIbscu7A8I7UmmrcSu4ubCXtijguY1TEOJEMu83R/S5FMeY54Vzny4mh811QnXdGV9Co= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556197; c=relaxed/simple; bh=Ny+mf6wTXBWd26jscZCcfKEiG9iktam74gC2s8tYJNE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=WeTYu+52ITzvtwkTnp3y6ykzrGmPb6LSp4+4WMMTktKDFc1dXeGkBAI5UFtnUoGY335/5CDENmNUfhcBF1GwCR6keTBYxGGS0Hlr+USRKhm7zUsQJcyiHcnOsfFic7XSJickTEfIdzKSs/NTlm7198Sp7cGfYVNcwaGqBsI3gmE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass 
(2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=oxlAEZgx; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="oxlAEZgx" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 38273C4CED6; Wed, 26 Feb 2025 07:49:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556197; bh=Ny+mf6wTXBWd26jscZCcfKEiG9iktam74gC2s8tYJNE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=oxlAEZgxQ0J0A5gI3SDhoy/+gtYrZK85pRYTbN6XAOsD+lwRoaVNBnxSHhCTG3/CM ym9Q5zD1uaVro05T+/pqd0kliEmyNdVsvr8fp63S7ZhiW61EIlOD8+7XLZSMCkXD8/ 42FD0Sw7pGkeg4mPxJWouN+pMmAoawTtPnP4usQezZlSGMglD2zQVfS4NCpI+oBkh4 jQy7XzJu5fJEtKfQfqKltR+VP8O3OmCdVrn3ME/MNMJm22j9pladiPJ0RwC9zO0vQX 1v5BqI3tB9JxNbzzxr5jpSGtQbZXnMB0BQyLStiUsUcfm6D5mN1F26PNjhUqfNU1h+ uj67B79Frk9JQ== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 11/19] cpufreq/amd-pstate-ut: Adjust variable scope Date: Wed, 26 Feb 2025 01:49:26 -0600 Message-ID: <20250226074934.1667721-12-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello In amd_pstate_ut_check_freq() and amd_pstate_ut_check_perf() the cpudata variable is only needed in the scope of the for loop. Move it there. Reviewed-by: Gautham R. 
Shenoy Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- v5: * Apply to amd_pstate_ut_check_perf() too * Add tag --- drivers/cpufreq/amd-pstate-ut.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index 2ab3017d7a0bb..edc1475989e3d 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -113,11 +113,11 @@ static int amd_pstate_ut_check_perf(u32 index) u32 highest_perf = 0, nominal_perf = 0, lowest_nonlinear_perf = 0, lowest_perf = 0; u64 cap1 = 0; struct cppc_perf_caps cppc_perf; - struct amd_cpudata *cpudata = NULL; union perf_cached cur_perf; for_each_online_cpu(cpu) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; + struct amd_cpudata *cpudata; policy = cpufreq_cpu_get(cpu); if (!policy) @@ -186,10 +186,10 @@ static int amd_pstate_ut_check_perf(u32 index) static int amd_pstate_ut_check_freq(u32 index) { int cpu = 0; - struct amd_cpudata *cpudata = NULL; for_each_online_cpu(cpu) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; + struct amd_cpudata *cpudata; policy = cpufreq_cpu_get(cpu); if (!policy) From patchwork Wed Feb 26 07:49:27 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991661 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2E90B27FE7F; Wed, 26 Feb 2025 07:49:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556199; cv=none; b=GPAAGqEeJyTt58+ziW1uco+UNgx5mlkG/d3ML2M4tFWcLj0o882lsAMaR2MX2oIZs7M6pITaTDxqjRnm4UjVlKQb4baqH2pvQcZyNRW8i/d6Hd8sWjgWgrR1+ky3nLSSLqpU0snkyafGiqkfquIzIJqev/7Hc/52tGkYXXBjrRo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556199; c=relaxed/simple; bh=zUQLVSiK6mFv12RaGNpEy8Bq9plyBiFjeVEZJAYlASc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=M6bOYGso/+9I1nIOOW4+s1RcQXhmB2vY4CGvO+fuV7TMgpexJ9WWkB0T3Mfyg5osuz6bUqYcM9e/+LpTA6jc5msfNLYMj4lYI/1M/n7CHx4TqcpclEN1RKEKOK4OR6SlSio7/jyv2VSk5qiMxJU6t07JGZWyXrZlOzi6r2KS07M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=nNMcut6w; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="nNMcut6w" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 97A27C4CEE2; Wed, 26 Feb 2025 07:49:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556198; bh=zUQLVSiK6mFv12RaGNpEy8Bq9plyBiFjeVEZJAYlASc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nNMcut6wFXBPTzxS+50jjqwMd6cS3pEWGPQXXfrokTr1DA8zE3tlVdf6JfHczmaqE P9iGuWzj3N4nJGf/UiQabySVuFNq6TmbTisKdNTkvOhCkqTaWca/t/+IVLCk8Y2Mla qnWd6FffSIiXN8Fvd2zLzcDdjvzNUi/b6/b5EaOf7X6S5bapqywb7le7Q4I9Pud+fp eCZySv0gsppuHDIDYdnq4lqf6hYy1E24RRc7U/zIBWw29nMKG3awJe/KwqliVKvO4I XScqtqK+cWhq5grmZcYkrUW0/OCZk5HmZZ40FLsO1T4csMmjvOPgoTMao4Piy/D5Sw ReCy4qZH4/Nzw== From: Mario Limonciello To: "Gautham R . 
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 12/19] cpufreq/amd-pstate: Replace all AMD_CPPC_* macros with masks Date: Wed, 26 Feb 2025 01:49:27 -0600 Message-ID: <20250226074934.1667721-13-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello Bitfield masks are easier to follow and less error prone. Reviewed-by: Dhananjay Ugwekar Reviewed-by: Gautham R. Shenoy Signed-off-by: Mario Limonciello --- v3: * Add tag * Add missing includes v2: * Add a comment in msr-index.h * Pick up tag --- arch/x86/include/asm/msr-index.h | 20 +++++++++++--------- arch/x86/kernel/acpi/cppc.c | 4 +++- drivers/cpufreq/amd-pstate-ut.c | 9 +++++---- drivers/cpufreq/amd-pstate.c | 16 ++++++---------- 4 files changed, 25 insertions(+), 24 deletions(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index c84930610c7e6..cfcc49e5cf925 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -701,15 +701,17 @@ #define MSR_AMD_CPPC_REQ 0xc00102b3 #define MSR_AMD_CPPC_STATUS 0xc00102b4 -#define AMD_CPPC_LOWEST_PERF(x) (((x) >> 0) & 0xff) -#define AMD_CPPC_LOWNONLIN_PERF(x) (((x) >> 8) & 0xff) -#define AMD_CPPC_NOMINAL_PERF(x) (((x) >> 16) & 0xff) -#define AMD_CPPC_HIGHEST_PERF(x) (((x) >> 24) & 0xff) - -#define AMD_CPPC_MAX_PERF(x) (((x) & 0xff) << 0) -#define AMD_CPPC_MIN_PERF(x) (((x) & 0xff) << 8) -#define AMD_CPPC_DES_PERF(x) (((x) & 0xff) << 16) -#define AMD_CPPC_ENERGY_PERF_PREF(x) (((x) & 0xff) << 24) +/* Masks for use with MSR_AMD_CPPC_CAP1 */ +#define AMD_CPPC_LOWEST_PERF_MASK GENMASK(7, 0) +#define AMD_CPPC_LOWNONLIN_PERF_MASK GENMASK(15, 8) +#define AMD_CPPC_NOMINAL_PERF_MASK GENMASK(23, 16) +#define AMD_CPPC_HIGHEST_PERF_MASK GENMASK(31, 24) + +/* Masks for use with MSR_AMD_CPPC_REQ */ +#define AMD_CPPC_MAX_PERF_MASK GENMASK(7, 0) +#define AMD_CPPC_MIN_PERF_MASK GENMASK(15, 8) +#define AMD_CPPC_DES_PERF_MASK GENMASK(23, 16) +#define AMD_CPPC_EPP_PERF_MASK GENMASK(31, 24) /* AMD Performance Counter Global Status and Control MSRs */ #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300 diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c index d745dd586303c..77bfb846490c0 100644 --- a/arch/x86/kernel/acpi/cppc.c +++ b/arch/x86/kernel/acpi/cppc.c @@ -4,6 +4,8 @@ * Copyright (c) 2016, Intel Corporation. 
*/ +#include + #include #include #include @@ -149,7 +151,7 @@ int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf) if (ret) goto out; - val = AMD_CPPC_HIGHEST_PERF(val); + val = FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, val); } else { ret = cppc_get_highest_perf(cpu, &val); if (ret) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index edc1475989e3d..e671bc7d15508 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -22,6 +22,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include #include #include #include @@ -142,10 +143,10 @@ static int amd_pstate_ut_check_perf(u32 index) return ret; } - highest_perf = AMD_CPPC_HIGHEST_PERF(cap1); - nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1); - lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1); - lowest_perf = AMD_CPPC_LOWEST_PERF(cap1); + highest_perf = FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, cap1); + nominal_perf = FIELD_GET(AMD_CPPC_NOMINAL_PERF_MASK, cap1); + lowest_nonlinear_perf = FIELD_GET(AMD_CPPC_LOWNONLIN_PERF_MASK, cap1); + lowest_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1); } cur_perf = READ_ONCE(cpudata->perf); diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 2cec8d7d92f51..c2260bbee4eb7 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -89,11 +89,6 @@ static bool cppc_enabled; static bool amd_pstate_prefcore = true; static struct quirk_entry *quirks; -#define AMD_CPPC_MAX_PERF_MASK GENMASK(7, 0) -#define AMD_CPPC_MIN_PERF_MASK GENMASK(15, 8) -#define AMD_CPPC_DES_PERF_MASK GENMASK(23, 16) -#define AMD_CPPC_EPP_PERF_MASK GENMASK(31, 24) - /* * AMD Energy Preference Performance (EPP) * The EPP is used in the CCLK DPM controller to drive @@ -439,12 +434,13 @@ static int msr_init_perf(struct amd_cpudata *cpudata) perf.highest_perf = numerator; perf.max_limit_perf = numerator; - perf.min_limit_perf = AMD_CPPC_LOWEST_PERF(cap1); - perf.nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1); - perf.lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1); - perf.lowest_perf = AMD_CPPC_LOWEST_PERF(cap1); + perf.min_limit_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1); + perf.nominal_perf = FIELD_GET(AMD_CPPC_NOMINAL_PERF_MASK, cap1); + perf.lowest_nonlinear_perf = FIELD_GET(AMD_CPPC_LOWNONLIN_PERF_MASK, cap1); + perf.lowest_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1); WRITE_ONCE(cpudata->perf, perf); - WRITE_ONCE(cpudata->prefcore_ranking, AMD_CPPC_HIGHEST_PERF(cap1)); + WRITE_ONCE(cpudata->prefcore_ranking, FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, cap1)); + return 0; } From patchwork Wed Feb 26 07:49:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991662 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8CCA227426A; Wed, 26 Feb 2025 07:50:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556200; cv=none; b=irWJEqsd7IDXFIc+FGxvATbeVDIrnNExo07gwG2yx0a4l32Bg9U2RD/dKQuYI0lz/wtDnB07qQG2YJV/5uEsJALPs/B49My7Mvnhlfzdo0k+R8Nsh3ZwvMlgsNATIyvEf4z9XutnYF4rC3bWMyezwTjrPDXa1zTsaOy7Gg45Lgg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; 
s=arc-20240116; t=1740556200; c=relaxed/simple; bh=rU//cHwKIhLi6KZBHAg93+4ucs5Sgd4xwHdvD2vWArk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uGLb+oAnCKBTlIMxWLL88kf5vwm2G9IBFFVgACM+NJgkaedBvF6AsNmDM8NpBWxYEznMZQTzlnd2mwtBoOKa84fypKAQjf439s0erUt0X5ZsWgbM0dTQLeKd3Ur2iow6rb/k/ICCQ2IVTGX0tU0JtBdDCyhHjYOYWqiHTHsWFsk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=tS0zs3ud; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="tS0zs3ud" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 02DD0C4CEE7; Wed, 26 Feb 2025 07:49:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556200; bh=rU//cHwKIhLi6KZBHAg93+4ucs5Sgd4xwHdvD2vWArk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tS0zs3ud6xw7wny9/SBrsrVUyVyVHsZhMS9SLzBpGXpNzANNfpY2Te6RZHBU6XUVS 5s7X134UoMWjA5WQ1mczNMnDVZbLV1zjE26olhMfP5qEEHShEJsocXiKrYLPC+DynD GCbaPgdQTnG9bd5LH16XIY3fPv62oPNOVYqtYkz75zguazACFB98U8dxzTD58jLR+I qKr3+uCLkjvjNBCyCZYOHrp8JG+peg/q8JVMb1QZu+0cmhXlCaUCZ7AHKqFuhYMpcy uJRtw0v/nN4qKl8j83bllAnb6M66xj6030p5rUkK6OlzVFFBL6tQb4OrjOCFDX1ukU K1YUW2/rK/SQw== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 13/19] cpufreq/amd-pstate: Cache CPPC request in shared mem case too Date: Wed, 26 Feb 2025 01:49:28 -0600 Message-ID: <20250226074934.1667721-14-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello In order to prevent a potential write for shmem_update_perf() cache the request into the cppc_req_cached variable normally only used for the MSR case. This adds symmetry into the code and potentially avoids extra writes. Reviewed-by: Dhananjay Ugwekar Reviewed-by: Gautham R. 
Shenoy Signed-off-by: Mario Limonciello --- drivers/cpufreq/amd-pstate.c | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index c2260bbee4eb7..0c686af5e062d 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -496,6 +496,8 @@ static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf, u8 des_perf, u8 max_perf, u8 epp, bool fast_switch) { struct cppc_perf_ctrls perf_ctrls; + u64 value, prev; + int ret; if (cppc_state == AMD_PSTATE_ACTIVE) { int ret = shmem_set_epp(cpudata, epp); @@ -504,11 +506,29 @@ static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf, return ret; } + value = prev = READ_ONCE(cpudata->cppc_req_cached); + + value &= ~(AMD_CPPC_MAX_PERF_MASK | AMD_CPPC_MIN_PERF_MASK | + AMD_CPPC_DES_PERF_MASK | AMD_CPPC_EPP_PERF_MASK); + value |= FIELD_PREP(AMD_CPPC_MAX_PERF_MASK, max_perf); + value |= FIELD_PREP(AMD_CPPC_DES_PERF_MASK, des_perf); + value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf); + value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); + + if (value == prev) + return 0; + perf_ctrls.max_perf = max_perf; perf_ctrls.min_perf = min_perf; perf_ctrls.desired_perf = des_perf; - return cppc_set_perf(cpudata->cpu, &perf_ctrls); + ret = cppc_set_perf(cpudata->cpu, &perf_ctrls); + if (ret) + return ret; + + WRITE_ONCE(cpudata->cppc_req_cached, value); + + return 0; } static inline bool amd_pstate_sample(struct amd_cpudata *cpudata) From patchwork Wed Feb 26 07:49:29 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991663 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 973BC280A31; Wed, 26 Feb 2025 07:50:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556201; cv=none; b=C/DsjWPxzBRC8A10/+9b2C2l9MYhhHzU9EkGtpNNF948hZDQA/4qU+yaMDjPTZ1OHAxVmVveISqGmO5gibhhcczJ3CCPbfSk3BK1zsfA60jm2B8cW4MMI9q09wr/5qJCbonp7ku0hK87SAIZJ0sGpmXMW3oWpW2bnsFNJ3q6vjM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556201; c=relaxed/simple; bh=vtMxuWJwSrrAP3uQZh3pVmPApHmtY5Bfq3fnSMPKSbc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=a1xCS5NaV8mn6YOdaS87c8Md0s8d/wIv9a3YF3gKqk23v5n9Lw93FlaHF5Sif308oFd+v06ymjmZG5DyudIJ3CPqP2KZf5xjUsxl7Mp7yzswCx6JFdQPQztVpnVT8P2gqtHlUq+VCu44+1RwxkSFrFHFBuvTksiBivShWkzce/c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=kERbFl56; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="kERbFl56" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 611BFC4CEE2; Wed, 26 Feb 2025 07:50:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556201; bh=vtMxuWJwSrrAP3uQZh3pVmPApHmtY5Bfq3fnSMPKSbc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kERbFl56Fvdjtamvpt6fPf+wwXpDZwlcUQ6r7jasIOqtt5lmneZLjr+wDUnZQU3V2 
DXBmK6a6EMCyDPhc9DpIqsV8bbx1dxoC9U1jj/3HJmjhKn036ooYA+3nXC4wVJUJmJ qgV+YB4x0xoajdBWi43tCagm/78EeG8yka9CteNK1Uu675fbTZKt9LSbhCAYDxDlJc 2QuUff+sjC3qhCzNqoI5NIcJW4Viqwp3ee8ItUM04MNrH+cH7aLdKDYaz56vJ0YRbH Az+fveL99iUekCXb5Ce6plg8/mwyhkn4XrprH5j8uO+SR6aulWOcimvKrS1pr52BKW 2J5GmwajLtlUQ== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 14/19] cpufreq/amd-pstate: Move all EPP tracing into *_update_perf and *_set_epp functions Date: Wed, 26 Feb 2025 01:49:29 -0600 Message-ID: <20250226074934.1667721-15-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello The EPP tracing is done by the caller today, but this precludes the information about whether the CPPC request has changed. Move it into the update_perf and set_epp functions and include information about whether the request has changed from the last one. amd_pstate_update_perf() and amd_pstate_set_epp() now require the policy as an argument instead of the cpudata. Reviewed-by: Dhananjay Ugwekar Reviewed-by: Gautham R. Shenoy Signed-off-by: Mario Limonciello --- v4: * Drop unused variables v3: * Add tag * Update commit message --- drivers/cpufreq/amd-pstate-trace.h | 13 +++- drivers/cpufreq/amd-pstate.c | 118 +++++++++++++++++------------ 2 files changed, 80 insertions(+), 51 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-trace.h b/drivers/cpufreq/amd-pstate-trace.h index f457d4af2c62e..32e1bdc588c52 100644 --- a/drivers/cpufreq/amd-pstate-trace.h +++ b/drivers/cpufreq/amd-pstate-trace.h @@ -90,7 +90,8 @@ TRACE_EVENT(amd_pstate_epp_perf, u8 epp, u8 min_perf, u8 max_perf, - bool boost + bool boost, + bool changed ), TP_ARGS(cpu_id, @@ -98,7 +99,8 @@ TRACE_EVENT(amd_pstate_epp_perf, epp, min_perf, max_perf, - boost), + boost, + changed), TP_STRUCT__entry( __field(unsigned int, cpu_id) @@ -107,6 +109,7 @@ TRACE_EVENT(amd_pstate_epp_perf, __field(u8, min_perf) __field(u8, max_perf) __field(bool, boost) + __field(bool, changed) ), TP_fast_assign( @@ -116,15 +119,17 @@ TRACE_EVENT(amd_pstate_epp_perf, __entry->min_perf = min_perf; __entry->max_perf = max_perf; __entry->boost = boost; + __entry->changed = changed; ), - TP_printk("cpu%u: [%hhu<->%hhu]/%hhu, epp=%hhu, boost=%u", + TP_printk("cpu%u: [%hhu<->%hhu]/%hhu, epp=%hhu, boost=%u, changed=%u", (unsigned int)__entry->cpu_id, (u8)__entry->min_perf, (u8)__entry->max_perf, (u8)__entry->highest_perf, (u8)__entry->epp, - (bool)__entry->boost + (bool)__entry->boost, + (bool)__entry->changed ) ); diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 0c686af5e062d..66b61ce124e21 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -228,9 +228,10 @@ static u8 shmem_get_epp(struct amd_cpudata *cpudata) return FIELD_GET(AMD_CPPC_EPP_PERF_MASK, epp); } -static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf, +static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf, u8 des_perf, u8 max_perf, u8 epp, bool fast_switch) { + struct amd_cpudata *cpudata = policy->driver_data; u64 value, prev; value = prev = 
READ_ONCE(cpudata->cppc_req_cached); @@ -242,6 +243,18 @@ static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf, value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf); value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); + if (trace_amd_pstate_epp_perf_enabled()) { + union perf_cached perf = READ_ONCE(cpudata->perf); + + trace_amd_pstate_epp_perf(cpudata->cpu, + perf.highest_perf, + epp, + min_perf, + max_perf, + policy->boost_enabled, + value != prev); + } + if (value == prev) return 0; @@ -256,24 +269,26 @@ static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf, } WRITE_ONCE(cpudata->cppc_req_cached, value); - WRITE_ONCE(cpudata->epp_cached, epp); + if (epp != cpudata->epp_cached) + WRITE_ONCE(cpudata->epp_cached, epp); return 0; } DEFINE_STATIC_CALL(amd_pstate_update_perf, msr_update_perf); -static inline int amd_pstate_update_perf(struct amd_cpudata *cpudata, +static inline int amd_pstate_update_perf(struct cpufreq_policy *policy, u8 min_perf, u8 des_perf, u8 max_perf, u8 epp, bool fast_switch) { - return static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf, + return static_call(amd_pstate_update_perf)(policy, min_perf, des_perf, max_perf, epp, fast_switch); } -static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp) +static int msr_set_epp(struct cpufreq_policy *policy, u8 epp) { + struct amd_cpudata *cpudata = policy->driver_data; u64 value, prev; int ret; @@ -281,6 +296,19 @@ static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp) value &= ~AMD_CPPC_EPP_PERF_MASK; value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); + if (trace_amd_pstate_epp_perf_enabled()) { + union perf_cached perf = cpudata->perf; + + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, + epp, + FIELD_GET(AMD_CPPC_MIN_PERF_MASK, + cpudata->cppc_req_cached), + FIELD_GET(AMD_CPPC_MAX_PERF_MASK, + cpudata->cppc_req_cached), + policy->boost_enabled, + value != prev); + } + if (value == prev) return 0; @@ -299,15 +327,29 @@ static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp) DEFINE_STATIC_CALL(amd_pstate_set_epp, msr_set_epp); -static inline int amd_pstate_set_epp(struct amd_cpudata *cpudata, u8 epp) +static inline int amd_pstate_set_epp(struct cpufreq_policy *policy, u8 epp) { - return static_call(amd_pstate_set_epp)(cpudata, epp); + return static_call(amd_pstate_set_epp)(policy, epp); } -static int shmem_set_epp(struct amd_cpudata *cpudata, u8 epp) +static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp) { - int ret; + struct amd_cpudata *cpudata = policy->driver_data; struct cppc_perf_ctrls perf_ctrls; + int ret; + + if (trace_amd_pstate_epp_perf_enabled()) { + union perf_cached perf = cpudata->perf; + + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, + epp, + FIELD_GET(AMD_CPPC_MIN_PERF_MASK, + cpudata->cppc_req_cached), + FIELD_GET(AMD_CPPC_MAX_PERF_MASK, + cpudata->cppc_req_cached), + policy->boost_enabled, + epp != cpudata->epp_cached); + } if (epp == cpudata->epp_cached) return 0; @@ -339,17 +381,7 @@ static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy, return -EBUSY; } - if (trace_amd_pstate_epp_perf_enabled()) { - union perf_cached perf = READ_ONCE(cpudata->perf); - - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, - epp, - FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), - FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached), - policy->boost_enabled); - } - - return amd_pstate_set_epp(cpudata, epp); + return amd_pstate_set_epp(policy, epp); } static inline int msr_cppc_enable(bool 
enable) @@ -492,15 +524,16 @@ static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata) return static_call(amd_pstate_init_perf)(cpudata); } -static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf, +static int shmem_update_perf(struct cpufreq_policy *policy, u8 min_perf, u8 des_perf, u8 max_perf, u8 epp, bool fast_switch) { + struct amd_cpudata *cpudata = policy->driver_data; struct cppc_perf_ctrls perf_ctrls; u64 value, prev; int ret; if (cppc_state == AMD_PSTATE_ACTIVE) { - int ret = shmem_set_epp(cpudata, epp); + int ret = shmem_set_epp(policy, epp); if (ret) return ret; @@ -515,6 +548,18 @@ static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf, value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf); value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); + if (trace_amd_pstate_epp_perf_enabled()) { + union perf_cached perf = READ_ONCE(cpudata->perf); + + trace_amd_pstate_epp_perf(cpudata->cpu, + perf.highest_perf, + epp, + min_perf, + max_perf, + policy->boost_enabled, + value != prev); + } + if (value == prev) return 0; @@ -592,7 +637,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf, cpudata->cpu, fast_switch); } - amd_pstate_update_perf(cpudata, min_perf, des_perf, max_perf, 0, fast_switch); + amd_pstate_update_perf(policy, min_perf, des_perf, max_perf, 0, fast_switch); } static int amd_pstate_verify(struct cpufreq_policy_data *policy_data) @@ -1531,7 +1576,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) return ret; WRITE_ONCE(cpudata->cppc_req_cached, value); } - ret = amd_pstate_set_epp(cpudata, cpudata->epp_default); + ret = amd_pstate_set_epp(policy, cpudata->epp_default); if (ret) return ret; @@ -1572,14 +1617,8 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) epp = READ_ONCE(cpudata->epp_cached); perf = READ_ONCE(cpudata->perf); - if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp, - perf.min_limit_perf, - perf.max_limit_perf, - policy->boost_enabled); - } - return amd_pstate_update_perf(cpudata, perf.min_limit_perf, 0U, + return amd_pstate_update_perf(policy, perf.min_limit_perf, 0U, perf.max_limit_perf, epp, false); } @@ -1611,20 +1650,12 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) static int amd_pstate_epp_reenable(struct cpufreq_policy *policy) { - struct amd_cpudata *cpudata = policy->driver_data; - union perf_cached perf = READ_ONCE(cpudata->perf); int ret; ret = amd_pstate_cppc_enable(true); if (ret) pr_err("failed to enable amd pstate during resume, return %d\n", ret); - if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, - cpudata->epp_cached, - FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), - perf.highest_perf, policy->boost_enabled); - } return amd_pstate_epp_update_limit(policy); } @@ -1652,14 +1683,7 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) if (cpudata->suspended) return 0; - if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, - AMD_CPPC_EPP_BALANCE_POWERSAVE, - perf.lowest_perf, perf.lowest_perf, - policy->boost_enabled); - } - - return amd_pstate_update_perf(cpudata, perf.lowest_perf, 0, perf.lowest_perf, + return amd_pstate_update_perf(policy, perf.lowest_perf, 0, perf.lowest_perf, AMD_CPPC_EPP_BALANCE_POWERSAVE, false); } From patchwork Wed Feb 26 07:49:30 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991664 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 55DFB280A5F; Wed, 26 Feb 2025 07:50:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556203; cv=none; b=uuPZCiZjBDwTihnmqlab9sYM3Asgoo6xitRneV9eQYPxDZrgIZLuD/9YZZSh1z1vawGx66fzSZ/b8LNzCavT5bMmWlGiy9tbD4ZEOouNT4gEz8I4saky3QNAr0/MHgSXS5iTITQv1lqJP9zGYTYFkrDObkB+Z9w2sHWqf5wOK5Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556203; c=relaxed/simple; bh=lvNlhg8ZQGFHcYmuoWOZ6HwRqxeR0QKm+GESmZ3ZXpE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Ht1ZcoVwjPOKC2ZP6Rj5dRCszaOAI2zfY7FGRUnNACVurD9a6KInLgzFnyLRnp8Eavj/hfG4GbG/PQJFDBYdzjaP6Hj8iL5gQ6CLPybazQXYD7dRjoAkpWvINVScSE5WDq/EqdYu1r9WwZYHagtT0G7x0h24zbKTDtslwYsQ5ko= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=gct4VRQv; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="gct4VRQv" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C0DB4C4CED6; Wed, 26 Feb 2025 07:50:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556202; bh=lvNlhg8ZQGFHcYmuoWOZ6HwRqxeR0QKm+GESmZ3ZXpE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gct4VRQvPVCltKkLmjYChOt093JLrbochGFkCh20Lb4YkcGUu5vgbwwZQKpRvukkF ok11NPnD8CdWu2Ae/ap4blzlRRP3Zz5UoHAiahGk7Ki6YH7e5yNtl3wMy31IZMqNc7 nzS9QbPcQInT0q+sTdF5rYw63GeCp2iFx9ay//7Ox36kYq/PyRaWgki3GJav6LxS4Y JUTATrB9pi+bqsGBi8znpd4K1vEFZ3mZ50t/E/4JaMTKkNkrS+HdzYVsx97LAnF7FD KrKpHsyFG95Duz87YT7kIYLad2oqkr8yAAWzirMinABPUmdU1JY6wkgesPj/eBKgW7 k5JGwX6Eh9PNg== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 15/19] cpufreq/amd-pstate: Update cppc_req_cached for shared mem EPP writes Date: Wed, 26 Feb 2025 01:49:30 -0600 Message-ID: <20250226074934.1667721-16-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello For EPP-only writes, update the cached variable so that the min/max performance controls don't need to be updated again. Reviewed-by: Dhananjay Ugwekar Reviewed-by: Gautham R.
Shenoy Signed-off-by: Mario Limonciello --- drivers/cpufreq/amd-pstate.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 66b61ce124e21..55b6231e6a092 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -336,6 +336,7 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp) { struct amd_cpudata *cpudata = policy->driver_data; struct cppc_perf_ctrls perf_ctrls; + u64 value; int ret; if (trace_amd_pstate_epp_perf_enabled()) { @@ -362,6 +363,11 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp) } WRITE_ONCE(cpudata->epp_cached, epp); + value = READ_ONCE(cpudata->cppc_req_cached); + value &= ~AMD_CPPC_EPP_PERF_MASK; + value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); + WRITE_ONCE(cpudata->cppc_req_cached, value); + return ret; } From patchwork Wed Feb 26 07:49:31 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991665 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B6D9B26E17E; Wed, 26 Feb 2025 07:50:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556204; cv=none; b=pAOLBl3Y/0DQSTQTrmCbxOR4nOYGTONfMdWIqTd711xIZ1pUgVM9uGoS/xXcp6h10dIZei4VB8oHW4kZ6cKVJvRHD3/QDbNlbmU4yRMmrTwyrJulG9ZtPFEJYXFn8wObICynufyy2rFBOuFkDUuK/83HGYIjx/fsaztx6my4iPU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556204; c=relaxed/simple; bh=LdFL5aOQDU2ldqURtyrrF9S+KVf7RvVGOnxYwvzO5oY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=QeSbbSIONeM62hug3+A2AJHxVJUJAbfBuy2pzJdvWl66p5slTZF9TSTIbhTkcTAelF4HUbGZaIzEH4havRGh+OZoWyGx+uxIjyCJojx4J3gqHeef4IR9qPNhXqyDUJlxsGLctanKWOvk2cjk9sOceFktSklnYe4hxBKTC7HbWQ0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=haqOJUcZ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="haqOJUcZ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2DB74C4CEE7; Wed, 26 Feb 2025 07:50:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556204; bh=LdFL5aOQDU2ldqURtyrrF9S+KVf7RvVGOnxYwvzO5oY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=haqOJUcZiCQ4F1kyX/DIKUdHREYSzxjS5tcvWbL3z9cC65L6auHY9S694Jg8mauSG T9Am9e6Kfhgdt5m2AVtGJEoOTQq17nX9bJcUw2b9iSGiK3SSkPs7ecwdmAWNY0Bjg4 mOm+A02R8+rt6u/oAirBON369vuk8rNZCqnKnLToBv4z6QStA4R8hOSEmQgfQ/iDwQ 78jax51YjIKrqvh1x06bxbRFUn8EpHrh0GiTwVjYwBT34t5HcN8/rkNvjV2CSV24/I jlSJAB7zQrIXOSQAD3L+dJ+WfQg+ztCG1kt9cZV5awRBAQ/QSr/kIh841XrWSdFI9F OylNg+Ku8uzFw== From: Mario Limonciello To: "Gautham R . 
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 16/19] cpufreq/amd-pstate: Drop debug statements for policy setting Date: Wed, 26 Feb 2025 01:49:31 -0600 Message-ID: <20250226074934.1667721-17-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello There are trace events that exist now for all amd-pstate modes that will output information right before programming to the hardware. This makes the existing debug statements unnecessary remaining overhead. Drop them. Reviewed-by: Dhananjay Ugwekar Reviewed-by: Gautham R. Shenoy Signed-off-by: Mario Limonciello --- drivers/cpufreq/amd-pstate.c | 4 ---- 1 file changed, 4 deletions(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 55b6231e6a092..f0d9ee62cb30d 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -667,7 +667,6 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy_data) } cpufreq_verify_within_cpu_limits(policy_data); - pr_debug("policy_max =%d, policy_min=%d\n", policy_data->max, policy_data->min); return 0; } @@ -1636,9 +1635,6 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) if (!policy->cpuinfo.max_freq) return -ENODEV; - pr_debug("set_policy: cpuinfo.max %u policy->max %u\n", - policy->cpuinfo.max_freq, policy->max); - cpudata->policy = policy->policy; ret = amd_pstate_epp_update_limit(policy); From patchwork Wed Feb 26 07:49:32 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991666 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A61DF26E622; Wed, 26 Feb 2025 07:50:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556205; cv=none; b=tLlwsICzKBbkm0btnp7yCFSHNcDTcuX7qW3i6KHShWFCIS04jZrwMJm44AxPdX7EbcDqqD2yZtTi/UO2PdDMzQTXOkpyMx74FwOeOpOKl7pOJsuuGpZslzHINU/b0L34az2nwbqL5k058lONoknQNnRQ6P6+PHi7JDxcnlLRVsA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556205; c=relaxed/simple; bh=hSoP8YKlBzaIWPk988HCBosOMTnwKrYcedoh3R7B29Q=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=i5QfIkCKRqSeb4mR12Lpz5JhbbKBES+w48Q8lo9Aqazc2nYJ1VG6ytNNCBrShCi8GEJDZ+UTXkD+i9O+T2X2vgfIKrzBiv++wdfpQ7WTGSv4nx39cBBmbiw0VcHtiKle/kndHJ3MDCbx+LxlFBmeMaP0NcqMYwfFaxxj/DdD4T0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=nwISdukQ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="nwISdukQ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8CD65C4CED6; Wed, 26 Feb 2025 
07:50:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556205; bh=hSoP8YKlBzaIWPk988HCBosOMTnwKrYcedoh3R7B29Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nwISdukQHivbvzA07w5y9agjyIxXf5opWOjNN3EFspMKeFIqLQ/dC3UbfcTXh4ne8 o5uFgA74CKkHhm4mhufOihwS5jzwwJbOuEzXhCaR69eVikugTAU9ZfwQCSLKoZr31O gICeux56GQV2C/TJp0NZWZ9R54COvlGy0SLGxG79hC+HoFAYvt663wcEGRo39POCBi R1QsrPH2Tzz5RO7cdMxO2QShrosoF9B4BVVwnS0XqfL9tPYcIA6mEd2BuA/PYyGTV2 4oSTszHlGknRvvM1KjTaXlBw/AO+C2OtUxyzOHdUU1sQ+XAIX7v8O5VbCGRFGtybXB qR9ScWagIzY8g== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello Subject: [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling Date: Wed, 26 Feb 2025 01:49:32 -0600 Message-ID: <20250226074934.1667721-18-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello The CPPC enable register is configured as "write once". That is any future writes don't actually do anything. Because of this, all the cleanup paths that currently exist for CPPC disable are non-effective. Rework CPPC enable to only enable after all the CAP registers have been read to avoid enabling CPPC on CPUs with invalid _CPC or unpopulated MSRs. As the register is write once, remove all cleanup paths as well. Signed-off-by: Mario Limonciello Reviewed-by: Dhananjay Ugwekar --- v5: * Drop unnecessary extra code in shmem_cppc_enable() * Remove redundant tracing in store_energy_performance_preference() * Add missing call to amd_pstate_cppc_enable() in passive case * Leave cpudata->suspended alone in amd_pstate_epp_cpu_online() * Drop spurious whitespace v4: * Remove unnecessary amd_pstate_update_perf() call during online * Remove unnecessary if (ret) ret. 
* Drop amd_pstate_cpu_resume() * Drop unnecessary derefs v3: * Fixup for suspend/resume issue --- drivers/cpufreq/amd-pstate.c | 179 +++++++---------------------------- 1 file changed, 35 insertions(+), 144 deletions(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index f0d9ee62cb30d..89e6d32223c9b 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -85,7 +85,6 @@ static struct cpufreq_driver *current_pstate_driver; static struct cpufreq_driver amd_pstate_driver; static struct cpufreq_driver amd_pstate_epp_driver; static int cppc_state = AMD_PSTATE_UNDEFINED; -static bool cppc_enabled; static bool amd_pstate_prefcore = true; static struct quirk_entry *quirks; @@ -371,89 +370,21 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp) return ret; } -static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy, - int pref_index) +static inline int msr_cppc_enable(struct cpufreq_policy *policy) { - struct amd_cpudata *cpudata = policy->driver_data; - u8 epp; - - if (!pref_index) - epp = cpudata->epp_default; - else - epp = epp_values[pref_index]; - - if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) { - pr_debug("EPP cannot be set under performance policy\n"); - return -EBUSY; - } - - return amd_pstate_set_epp(policy, epp); -} - -static inline int msr_cppc_enable(bool enable) -{ - int ret, cpu; - unsigned long logical_proc_id_mask = 0; - - /* - * MSR_AMD_CPPC_ENABLE is write-once, once set it cannot be cleared. - */ - if (!enable) - return 0; - - if (enable == cppc_enabled) - return 0; - - for_each_present_cpu(cpu) { - unsigned long logical_id = topology_logical_package_id(cpu); - - if (test_bit(logical_id, &logical_proc_id_mask)) - continue; - - set_bit(logical_id, &logical_proc_id_mask); - - ret = wrmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_ENABLE, - enable); - if (ret) - return ret; - } - - cppc_enabled = enable; - return 0; + return wrmsrl_safe_on_cpu(policy->cpu, MSR_AMD_CPPC_ENABLE, 1); } -static int shmem_cppc_enable(bool enable) +static int shmem_cppc_enable(struct cpufreq_policy *policy) { - int cpu, ret = 0; - struct cppc_perf_ctrls perf_ctrls; - - if (enable == cppc_enabled) - return 0; - - for_each_present_cpu(cpu) { - ret = cppc_set_enable(cpu, enable); - if (ret) - return ret; - - /* Enable autonomous mode for EPP */ - if (cppc_state == AMD_PSTATE_ACTIVE) { - /* Set desired perf as zero to allow EPP firmware control */ - perf_ctrls.desired_perf = 0; - ret = cppc_set_perf(cpu, &perf_ctrls); - if (ret) - return ret; - } - } - - cppc_enabled = enable; - return ret; + return cppc_set_enable(policy->cpu, 1); } DEFINE_STATIC_CALL(amd_pstate_cppc_enable, msr_cppc_enable); -static inline int amd_pstate_cppc_enable(bool enable) +static inline int amd_pstate_cppc_enable(struct cpufreq_policy *policy) { - return static_call(amd_pstate_cppc_enable)(enable); + return static_call(amd_pstate_cppc_enable)(policy); } static int msr_init_perf(struct amd_cpudata *cpudata) @@ -1069,6 +1000,10 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy) cpudata->nominal_freq, perf.highest_perf); + ret = amd_pstate_cppc_enable(policy); + if (ret) + goto free_cpudata1; + policy->boost_enabled = READ_ONCE(cpudata->boost_supported); /* It will be updated by governor */ @@ -1116,28 +1051,6 @@ static void amd_pstate_cpu_exit(struct cpufreq_policy *policy) kfree(cpudata); } -static int amd_pstate_cpu_resume(struct cpufreq_policy *policy) -{ - int ret; - - ret = amd_pstate_cppc_enable(true); - if (ret) - pr_err("failed 
to enable amd-pstate during resume, return %d\n", ret); - - return ret; -} - -static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy) -{ - int ret; - - ret = amd_pstate_cppc_enable(false); - if (ret) - pr_err("failed to disable amd-pstate during suspend, return %d\n", ret); - - return ret; -} - /* Sysfs attributes */ /* @@ -1229,8 +1142,10 @@ static ssize_t show_energy_performance_available_preferences( static ssize_t store_energy_performance_preference( struct cpufreq_policy *policy, const char *buf, size_t count) { + struct amd_cpudata *cpudata = policy->driver_data; char str_preference[21]; ssize_t ret; + u8 epp; ret = sscanf(buf, "%20s", str_preference); if (ret != 1) @@ -1240,7 +1155,17 @@ static ssize_t store_energy_performance_preference( if (ret < 0) return -EINVAL; - ret = amd_pstate_set_energy_pref_index(policy, ret); + if (!ret) + epp = cpudata->epp_default; + else + epp = epp_values[ret]; + + if (epp > 0 && policy->policy == CPUFREQ_POLICY_PERFORMANCE) { + pr_debug("EPP cannot be set under performance policy\n"); + return -EBUSY; + } + + ret = amd_pstate_set_epp(policy, epp); return ret ? ret : count; } @@ -1273,7 +1198,6 @@ static ssize_t show_energy_performance_preference( static void amd_pstate_driver_cleanup(void) { - amd_pstate_cppc_enable(false); cppc_state = AMD_PSTATE_DISABLE; current_pstate_driver = NULL; } @@ -1307,14 +1231,6 @@ static int amd_pstate_register_driver(int mode) cppc_state = mode; - ret = amd_pstate_cppc_enable(true); - if (ret) { - pr_err("failed to enable cppc during amd-pstate driver registration, return %d\n", - ret); - amd_pstate_driver_cleanup(); - return ret; - } - /* at least one CPU supports CPB */ current_pstate_driver->boost_enabled = cpu_feature_enabled(X86_FEATURE_CPB); @@ -1554,11 +1470,15 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf, cpudata->nominal_freq, perf.highest_perf); + policy->driver_data = cpudata; + + ret = amd_pstate_cppc_enable(policy); + if (ret) + goto free_cpudata1; /* It will be updated by governor */ policy->cur = policy->cpuinfo.min_freq; - policy->driver_data = cpudata; policy->boost_enabled = READ_ONCE(cpudata->boost_supported); @@ -1650,31 +1570,11 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) return 0; } -static int amd_pstate_epp_reenable(struct cpufreq_policy *policy) -{ - int ret; - - ret = amd_pstate_cppc_enable(true); - if (ret) - pr_err("failed to enable amd pstate during resume, return %d\n", ret); - - - return amd_pstate_epp_update_limit(policy); -} - static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy) { - struct amd_cpudata *cpudata = policy->driver_data; - int ret; - - pr_debug("AMD CPU Core %d going online\n", cpudata->cpu); + pr_debug("AMD CPU Core %d going online\n", policy->cpu); - ret = amd_pstate_epp_reenable(policy); - if (ret) - return ret; - cpudata->suspended = false; - - return 0; + return amd_pstate_cppc_enable(policy); } static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) @@ -1692,11 +1592,6 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) static int amd_pstate_epp_suspend(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata = policy->driver_data; - int ret; - - /* avoid suspending when EPP is not enabled */ - if (cppc_state != AMD_PSTATE_ACTIVE) - return 0; /* invalidate to ensure it's rewritten during resume */ cpudata->cppc_req_cached = 0; @@ -1704,11 +1599,6 @@ static int amd_pstate_epp_suspend(struct 
cpufreq_policy *policy) /* set this flag to avoid setting core offline*/ cpudata->suspended = true; - /* disable CPPC in lowlevel firmware */ - ret = amd_pstate_cppc_enable(false); - if (ret) - pr_err("failed to suspend, return %d\n", ret); - return 0; } @@ -1717,8 +1607,12 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy) struct amd_cpudata *cpudata = policy->driver_data; if (cpudata->suspended) { + int ret; + /* enable amd pstate from suspend state*/ - amd_pstate_epp_reenable(policy); + ret = amd_pstate_epp_update_limit(policy); + if (ret) + return ret; cpudata->suspended = false; } @@ -1733,8 +1627,6 @@ static struct cpufreq_driver amd_pstate_driver = { .fast_switch = amd_pstate_fast_switch, .init = amd_pstate_cpu_init, .exit = amd_pstate_cpu_exit, - .suspend = amd_pstate_cpu_suspend, - .resume = amd_pstate_cpu_resume, .set_boost = amd_pstate_set_boost, .update_limits = amd_pstate_update_limits, .name = "amd-pstate", @@ -1901,7 +1793,6 @@ static int __init amd_pstate_init(void) global_attr_free: cpufreq_unregister_driver(current_pstate_driver); - amd_pstate_cppc_enable(false); return ret; } device_initcall(amd_pstate_init); From patchwork Wed Feb 26 07:49:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991667 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1AD831E1DFE; Wed, 26 Feb 2025 07:50:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556207; cv=none; b=kTCsr+2V5TSw3OoAoDLSHrED3FJlRYD3OHYQEOFPWe51NVVT3fE00MU4nabUGbJ0dO8ija3Rr8t8OFCnFAcJT1IX5yXHKGyS/KBqHj3aAZsT3SAc71e+xUn+SOWyqUav7XCwefGfZShjLSUnC76Wvqeknx3qZQy7Zl2cW4aN9hg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556207; c=relaxed/simple; bh=tHMEHdR9gJv1rEz04S9UOsdOOKCypTfDr177SyS3vic=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=QJw2VoVz3tLQahjKExzD+WBPXFECq+7XJjRSaP0db9MuPZH6pvAXLUOHhNhp78LCVTFPF2qg4gkAcTMdmdHH/9apAforyqQkzhtzwAQkbY4R2rB77U5HB9e2LDQc3kgkvRgiYb5NuQr+6boNgucSwkzIxMo70jzceSBx0LwJijw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=mKWZX3Fd; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="mKWZX3Fd" Received: by smtp.kernel.org (Postfix) with ESMTPSA id CBD43C4CEEB; Wed, 26 Feb 2025 07:50:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556206; bh=tHMEHdR9gJv1rEz04S9UOsdOOKCypTfDr177SyS3vic=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mKWZX3FdOUzIRyYwncNihTYlaHP0SFukViM885w6/rNBCgx8VV7lnL4bAJaX9IZEa 8N9g6ZAJV77z6ea9x6trDL6iMHjMla0lXodED3DuQB+r72gyA4wCDMd8Rkm6wHtZ8T 8whKH9UslXE8BL5KthstMA7Ee9PPhUvZ762iJSjZGnMrhPT+J2WglR+n4YGxIvpwYn ihgkAZ5zOis7XqXkr7NLRG77Ij/0meHjL92sjvFf/NWnlFTp/S2+sdgHGkrtqTPk1Q 219MAaJudsSS34yZ5/lTLcTpZ9b8CngxBpm5GVAT/4T0Aw/6DbtlgBrCGR9JJM0M/W eW459rczexgdQ== From: Mario Limonciello To: "Gautham R . 
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 18/19] cpufreq/amd-pstate: Stop caching EPP Date: Wed, 26 Feb 2025 01:49:33 -0600 Message-ID: <20250226074934.1667721-19-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello EPP values are cached in the cpudata structure per CPU. This is needless though because they are also cached in the CPPC request variable. Drop the separate cache for EPP values and always reference the CPPC request variable when needed. Reviewed-by: Dhananjay Ugwekar Reviewed-by: Gautham R. Shenoy Signed-off-by: Mario Limonciello --- v5: * Rebase on earlier patch changes --- drivers/cpufreq/amd-pstate.c | 19 ++++++++++--------- drivers/cpufreq/amd-pstate.h | 1 - 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 89e6d32223c9b..24a1f9e129b61 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -268,8 +268,6 @@ static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf, } WRITE_ONCE(cpudata->cppc_req_cached, value); - if (epp != cpudata->epp_cached) - WRITE_ONCE(cpudata->epp_cached, epp); return 0; } @@ -318,7 +316,6 @@ static int msr_set_epp(struct cpufreq_policy *policy, u8 epp) } /* update both so that msr_update_perf() can effectively check */ - WRITE_ONCE(cpudata->epp_cached, epp); WRITE_ONCE(cpudata->cppc_req_cached, value); return ret; @@ -335,9 +332,12 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp) { struct amd_cpudata *cpudata = policy->driver_data; struct cppc_perf_ctrls perf_ctrls; + u8 epp_cached; u64 value; int ret; + + epp_cached = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached); if (trace_amd_pstate_epp_perf_enabled()) { union perf_cached perf = cpudata->perf; @@ -348,10 +348,10 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp) FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached), policy->boost_enabled, - epp != cpudata->epp_cached); + epp != epp_cached); } - if (epp == cpudata->epp_cached) + if (epp == epp_cached) return 0; perf_ctrls.energy_perf = epp; @@ -360,7 +360,6 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp) pr_debug("failed to set energy perf value (%d)\n", ret); return ret; } - WRITE_ONCE(cpudata->epp_cached, epp); value = READ_ONCE(cpudata->cppc_req_cached); value &= ~AMD_CPPC_EPP_PERF_MASK; @@ -1174,9 +1173,11 @@ static ssize_t show_energy_performance_preference( struct cpufreq_policy *policy, char *buf) { struct amd_cpudata *cpudata = policy->driver_data; - u8 preference; + u8 preference, epp; + + epp = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached); - switch (cpudata->epp_cached) { + switch (epp) { case AMD_CPPC_EPP_PERFORMANCE: preference = EPP_INDEX_PERFORMANCE; break; @@ -1539,7 +1540,7 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) epp = 0; else - epp = READ_ONCE(cpudata->epp_cached); + epp = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached); perf = READ_ONCE(cpudata->perf); diff --git 
a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h index 1557e1afea79c..fbe1c08d3f061 100644 --- a/drivers/cpufreq/amd-pstate.h +++ b/drivers/cpufreq/amd-pstate.h @@ -102,7 +102,6 @@ struct amd_cpudata { bool hw_prefcore; /* EPP feature related attributes*/ - u8 epp_cached; u32 policy; bool suspended; u8 epp_default; From patchwork Wed Feb 26 07:49:34 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mario Limonciello X-Patchwork-Id: 13991668 X-Patchwork-Delegate: mario.limonciello@amd.com Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CE3E022422C; Wed, 26 Feb 2025 07:50:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556208; cv=none; b=kDY6YOuLypIWnMfQJAwELB3Sb5hB0LggBIob9PGCHxSt9qH/BrPYpdyQbHp1YudURMCW66v6crDejdIUsi7rSVMATlJc7e6MKp7GURbfSih+ahx5gBhjZrb1VIKLx3Y5k/SeVkNh7yEHGkzgJDU5iKbfJxdnwr8I1YW1oMb24KE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740556208; c=relaxed/simple; bh=wy2qW87sv3CHOyK6U2e65npVmbKA0sNrgjJ+lZsr8Zg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ErmwBFpm08tZysovKX8dME3WQNFm6bTpYb5xqFiTZe/xzM51vDmbeKLYuvDumJJdaJ3QlTQAcUm5P9V5NmOjM5Uub0UTw8UYyqfKT1DkzPHcMPswRF/Ahnf0ZRFJUmRI52MB5JHMBtMSWQFtdozMiJp2Vg2NIXw8dfSrE9pX/80= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=TuJUCmBz; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="TuJUCmBz" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 47ABBC4CEE9; Wed, 26 Feb 2025 07:50:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1740556208; bh=wy2qW87sv3CHOyK6U2e65npVmbKA0sNrgjJ+lZsr8Zg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TuJUCmBzbN8UUQsXAEoah0Gyxhfyuao4yhEyHftpbZ+jVXyK32YpEMsE/g5z0GvRa yNzVLSUECefqJKID8TXckU5PQcjlyAlyH+0hri0wf4NJtNE5/yVKUnpC3mnHGOefbb uh0SGkhX/1s3AvgsvBTZD2rd1CtjESSkPWwHZ4n+hTnqu2KfXXelpMn6hKg1//+z8k b6RZle1ZXb7HHTa0ysOIfaejd4umXrIXsa/SMsZ/xSJPQdrq1FFJfG7KK5EGYn+ewp h42WwCCNYMrw1NvAVvplYma1o0cxd3qEA+i174RAw4McMBt1mmwDQHorKhv9LGbdPR qLaer6TO5ik3Q== From: Mario Limonciello To: "Gautham R . Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar Subject: [PATCH v5 19/19] cpufreq/amd-pstate: Drop actions in amd_pstate_epp_cpu_offline() Date: Wed, 26 Feb 2025 01:49:34 -0600 Message-ID: <20250226074934.1667721-20-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250226074934.1667721-1-superm1@kernel.org> References: <20250226074934.1667721-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello When the CPU goes offline there is no need to change the CPPC request because the CPU will go into the deepest C-state it supports already. 
Actually changing the CPPC request when it goes offline messes up the cached values and can lead to the wrong values being restored when it comes back. Instead drop the actions and if the CPU comes back online let amd_pstate_epp_set_policy() restore it to expected values. Reviewed-by: Dhananjay Ugwekar Signed-off-by: Mario Limonciello --- v5: * Reword commit message * Add tag --- drivers/cpufreq/amd-pstate.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 24a1f9e129b61..4a364ae9b56a1 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -1580,14 +1580,7 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy) static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) { - struct amd_cpudata *cpudata = policy->driver_data; - union perf_cached perf = READ_ONCE(cpudata->perf); - - if (cpudata->suspended) - return 0; - - return amd_pstate_update_perf(policy, perf.lowest_perf, 0, perf.lowest_perf, - AMD_CPPC_EPP_BALANCE_POWERSAVE, false); + return 0; } static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
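A note on the EPP rework in patches 15 and 18: the common idea is that the EPP value is kept inside the cached CPPC request word rather than in a separate epp_cached field, so an EPP-only write rewrites just the EPP bit-field and the min/max performance fields stay consistent. The standalone sketch below illustrates that pattern; the mask layout and the local FIELD_PREP()/FIELD_GET() stand-ins are assumptions made for the example and are not the driver's actual definitions.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout of a 64-bit CPPC request word, for illustration only. */
#define CPPC_MIN_PERF_MASK	0x00000000000000ffULL
#define CPPC_MAX_PERF_MASK	0x000000000000ff00ULL
#define CPPC_EPP_MASK		0x00000000ff000000ULL

/* Minimal stand-ins for the kernel's FIELD_PREP()/FIELD_GET() helpers. */
#define FIELD_SHIFT(mask)	__builtin_ctzll(mask)
#define FIELD_PREP(mask, val)	(((uint64_t)(val) << FIELD_SHIFT(mask)) & (mask))
#define FIELD_GET(mask, reg)	(((reg) & (mask)) >> FIELD_SHIFT(mask))

/* EPP-only update: rewrite just the EPP bits of the cached request word. */
static uint64_t cache_epp(uint64_t cached_req, uint8_t epp)
{
	cached_req &= ~CPPC_EPP_MASK;
	cached_req |= FIELD_PREP(CPPC_EPP_MASK, epp);
	return cached_req;
}

int main(void)
{
	/* Pretend min_perf=0x10 and max_perf=0xff were programmed earlier. */
	uint64_t req = FIELD_PREP(CPPC_MIN_PERF_MASK, 0x10) |
		       FIELD_PREP(CPPC_MAX_PERF_MASK, 0xff);

	req = cache_epp(req, 0x40);

	/* The perf limits are untouched and EPP can be read back directly. */
	printf("min=%#llx max=%#llx epp=%#llx\n",
	       (unsigned long long)FIELD_GET(CPPC_MIN_PERF_MASK, req),
	       (unsigned long long)FIELD_GET(CPPC_MAX_PERF_MASK, req),
	       (unsigned long long)FIELD_GET(CPPC_EPP_MASK, req));
	return 0;
}

Reading EPP back with a field extraction on the cached request is what makes the separate epp_cached field redundant in patch 18.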
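Similarly, the key constraint behind patch 17 is that the CPPC enable control is write-once: there is nothing to undo on teardown, so enabling can simply happen once per CPU after its capability registers have been read. The sketch below models that flow with a function pointer standing in for the driver's static_call dispatch between the MSR and shared-memory backends; all names here are hypothetical and the printf() calls merely mark where the real register writes would occur.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the MSR backend: a single write to a write-once enable bit. */
static int msr_cppc_enable(int cpu)
{
	printf("cpu%d: enable via MSR (write-once, never cleared)\n", cpu);
	return 0;
}

/* Stand-in for the shared-memory backend (cppc_set_enable() in the driver). */
static int shmem_cppc_enable(int cpu)
{
	printf("cpu%d: enable via shared memory\n", cpu);
	return 0;
}

/* Crude model of the static_call target, resolved once at driver init. */
static int (*cppc_enable)(int cpu) = msr_cppc_enable;

static int cpu_policy_init(int cpu)
{
	/*
	 * Enable only after this CPU's capability registers have been read
	 * successfully; because the control is write-once there is no
	 * matching disable in any error or teardown path.
	 */
	return cppc_enable(cpu);
}

int main(void)
{
	bool has_cppc_msr = false;	/* pretend this is a shared-memory system */

	if (!has_cppc_msr)
		cppc_enable = shmem_cppc_enable;

	return cpu_policy_init(0);
}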