From patchwork Fri Feb 21 09:20:12 2025
X-Patchwork-Submitter: Taniya Das
X-Patchwork-Id: 13985041
From: Taniya Das
Date: Fri, 21 Feb 2025 14:50:12 +0530
Subject: [PATCH v5 01/10] clk: qcom: clk-alpha-pll: Add support for dynamic update for
 slewing PLLs
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250221-qcs615-v5-mm-cc-v5-1-b6d9ddf2f28d@quicinc.com>
References: <20250221-qcs615-v5-mm-cc-v5-0-b6d9ddf2f28d@quicinc.com>
In-Reply-To: <20250221-qcs615-v5-mm-cc-v5-0-b6d9ddf2f28d@quicinc.com>
To: Bjorn Andersson, Michael Turquette, Stephen Boyd, Rob Herring,
 Krzysztof Kozlowski, Conor Dooley, Catalin Marinas, Will Deacon
CC: Ajit Pandey, Imran Shaik, Jagadeesh Kona, Taniya Das
X-Mailer: b4 0.15-dev-aa3f6

The alpha PLLs which slew to a new frequency at runtime require the PLL
to be calibrated at the midpoint of the VCO range. Add new PLL ops which
support slewing the PLL to a new frequency.

Reviewed-by: Imran Shaik
Signed-off-by: Taniya Das
---
 drivers/clk/qcom/clk-alpha-pll.c | 170 +++++++++++++++++++++++++++++++++++++++
 drivers/clk/qcom/clk-alpha-pll.h |   1 +
 2 files changed, 171 insertions(+)

diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
index cec0afea8e446010f0d4140d4ef63121706dde47..7d784db8b7441e886d94ded1d3e3258dda46674c 100644
--- a/drivers/clk/qcom/clk-alpha-pll.c
+++ b/drivers/clk/qcom/clk-alpha-pll.c
@@ -2960,3 +2960,173 @@ const struct clk_ops clk_alpha_pll_regera_ops = {
 	.set_rate = clk_zonda_pll_set_rate,
 };
 EXPORT_SYMBOL_GPL(clk_alpha_pll_regera_ops);
+
+static int clk_alpha_pll_slew_update(struct clk_alpha_pll *pll)
+{
+	int ret;
+	u32 val;
+
+	regmap_update_bits(pll->clkr.regmap, PLL_MODE(pll), PLL_UPDATE, PLL_UPDATE);
+	regmap_read(pll->clkr.regmap, PLL_MODE(pll), &val);
+
+	ret = wait_for_pll_update(pll);
+	if (ret)
+		return ret;
+	/*
+	 * Hardware programming mandates a wait of at least 570ns before polling the LOCK
+	 * detect bit. Use a delay of 1us just to be safe.
+	 */
+	mb();
+	udelay(1);
+
+	return wait_for_pll_enable_lock(pll);
+}
+
+static int clk_alpha_pll_slew_set_rate(struct clk_hw *hw, unsigned long rate,
+				       unsigned long parent_rate)
+{
+	struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+	unsigned long freq_hz;
+	const struct pll_vco *curr_vco, *vco;
+	u32 l, alpha_width = pll_alpha_width(pll);
+	u64 a;
+
+	freq_hz = alpha_pll_round_rate(rate, parent_rate, &l, &a, alpha_width);
+	if (freq_hz != rate) {
+		pr_err("alpha_pll: Call clk_set_rate with rounded rates!\n");
+		return -EINVAL;
+	}
+
+	curr_vco = alpha_pll_find_vco(pll, clk_hw_get_rate(hw));
+	if (!curr_vco) {
+		pr_err("alpha pll: not in a valid vco range\n");
+		return -EINVAL;
+	}
+
+	vco = alpha_pll_find_vco(pll, freq_hz);
+	if (!vco) {
+		pr_err("alpha pll: not in a valid vco range\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Dynamic pll update does not support switching frequencies across
+	 * vco ranges. In those cases fall back to the normal alpha set rate.
+	 */
+	if (curr_vco->val != vco->val)
+		return clk_alpha_pll_set_rate(hw, rate, parent_rate);
+
+	a = a << (ALPHA_REG_BITWIDTH - ALPHA_BITWIDTH);
+
+	regmap_write(pll->clkr.regmap, PLL_L_VAL(pll), l);
+	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), a);
+	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL_U(pll), a >> 32);
+
+	/* Ensure that the writes above go through before proceeding. */
+	mb();
+
+	if (clk_hw_is_enabled(hw))
+		return clk_alpha_pll_slew_update(pll);
+
+	return 0;
+}
+
+/*
+ * Slewing PLLs should be brought up at a frequency in the middle of the
+ * desired VCO range. So after bringing up the PLL at the calibration
+ * frequency, set it back to the desired frequency (the one set by the
+ * previous clk_set_rate).
+ */
+static int clk_alpha_pll_calibrate(struct clk_hw *hw)
+{
+	unsigned long calibration_freq, freq_hz;
+	struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+	struct clk_hw *parent;
+	const struct pll_vco *vco;
+	u32 l, alpha_width = pll_alpha_width(pll);
+	int rc;
+	u64 a;
+
+	parent = clk_hw_get_parent(hw);
+	if (!parent) {
+		pr_err("alpha pll: no valid parent found\n");
+		return -EINVAL;
+	}
+
+	vco = alpha_pll_find_vco(pll, clk_hw_get_rate(hw));
+	if (!vco) {
+		pr_err("alpha pll: not in a valid vco range\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Since vco_sel is not allowed to change while slewing, the vco table
+	 * should have only one entry, i.e. index = 0; use it to find the
+	 * calibration frequency.
+	 */
+	calibration_freq = (pll->vco_table[0].min_freq + pll->vco_table[0].max_freq) / 2;
+
+	freq_hz = alpha_pll_round_rate(calibration_freq, clk_hw_get_rate(parent),
+				       &l, &a, alpha_width);
+	if (freq_hz != calibration_freq) {
+		pr_err("alpha_pll: call clk_set_rate with rounded rates!\n");
+		return -EINVAL;
+	}
+
+	/* Set up the PLL for the calibration frequency */
+	a <<= (ALPHA_REG_BITWIDTH - ALPHA_BITWIDTH);
+
+	regmap_write(pll->clkr.regmap, PLL_L_VAL(pll), l);
+	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), a);
+	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL_U(pll), a >> 32);
+
+	regmap_update_bits(pll->clkr.regmap, PLL_USER_CTL(pll), PLL_VCO_MASK << PLL_VCO_SHIFT,
+			   vco->val << PLL_VCO_SHIFT);
+
+	/* Bring up the PLL at the calibration frequency */
+	rc = clk_alpha_pll_enable(hw);
+	if (rc) {
+		pr_err("alpha pll calibration failed\n");
+		return rc;
+	}
+
+	/*
+	 * The PLL is now running at the calibration frequency,
+	 * so slew it to the previously set frequency.
+	 */
+	freq_hz = alpha_pll_round_rate(clk_hw_get_rate(hw),
+				       clk_hw_get_rate(parent), &l, &a, alpha_width);
+
+	pr_debug("pll %s: setting back to required rate %lu, freq_hz %ld\n",
+		 clk_hw_get_name(hw), clk_hw_get_rate(hw), freq_hz);
+
+	/* Setup the PLL for the new frequency */
+	a <<= (ALPHA_REG_BITWIDTH - ALPHA_BITWIDTH);
+
+	regmap_write(pll->clkr.regmap, PLL_L_VAL(pll), l);
+	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), a);
+	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL_U(pll), a >> 32);
+
+	regmap_update_bits(pll->clkr.regmap, PLL_USER_CTL(pll), PLL_ALPHA_EN, PLL_ALPHA_EN);
+
+	return clk_alpha_pll_slew_update(pll);
+}
+
+static int clk_alpha_pll_slew_enable(struct clk_hw *hw)
+{
+	int rc;
+
+	rc = clk_alpha_pll_calibrate(hw);
+	if (rc)
+		return rc;
+
+	return clk_alpha_pll_enable(hw);
+}
+
+const struct clk_ops clk_alpha_pll_slew_ops = {
+	.enable = clk_alpha_pll_slew_enable,
+	.disable = clk_alpha_pll_disable,
+	.recalc_rate = clk_alpha_pll_recalc_rate,
+	.round_rate = clk_alpha_pll_round_rate,
+	.set_rate = clk_alpha_pll_slew_set_rate,
+};
+EXPORT_SYMBOL(clk_alpha_pll_slew_ops);

diff --git a/drivers/clk/qcom/clk-alpha-pll.h b/drivers/clk/qcom/clk-alpha-pll.h
index 79aca8525262211ae5295245427d4540abf1e09a..1d19001605eb10fd8ae8041c56d951e928cbbe9f 100644
--- a/drivers/clk/qcom/clk-alpha-pll.h
+++ b/drivers/clk/qcom/clk-alpha-pll.h
@@ -204,6 +204,7 @@ extern const struct clk_ops clk_alpha_pll_rivian_evo_ops;
 #define clk_alpha_pll_postdiv_rivian_evo_ops clk_alpha_pll_postdiv_fabia_ops
 
 extern const struct clk_ops clk_alpha_pll_regera_ops;
+extern const struct clk_ops clk_alpha_pll_slew_ops;
 
 void clk_alpha_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
 			     const struct alpha_pll_config *config);
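
For context, the snippet below is a minimal usage sketch (not part of the patch)
of how a clock provider might hook a PLL up to the new clk_alpha_pll_slew_ops,
following the usual drivers/clk/qcom conventions. Because slewing never crosses
VCO ranges, the VCO table is expected to carry a single entry whose midpoint is
used as the calibration frequency. The names (example_pll, example_pll_vco, the
"bi_tcxo" parent), the register offset, and the frequency range are hypothetical,
and the usual clk_alpha_pll_configure() call at probe time is omitted.

#include <linux/clk-provider.h>

#include "clk-alpha-pll.h"
#include "clk-regmap.h"

/* Single VCO range; the slew enable path calibrates at its midpoint. */
static const struct pll_vco example_pll_vco[] = {
	{ 1000000000, 2000000000, 0 },
};

/* Hypothetical PLL wired to the slewing ops added by this patch. */
static struct clk_alpha_pll example_pll = {
	.offset = 0x0,
	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
	.vco_table = example_pll_vco,
	.num_vco = ARRAY_SIZE(example_pll_vco),
	.clkr.hw.init = &(struct clk_init_data){
		.name = "example_pll",
		.parent_data = &(const struct clk_parent_data){
			.fw_name = "bi_tcxo",
		},
		.num_parents = 1,
		.ops = &clk_alpha_pll_slew_ops,
	},
};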