From patchwork Tue Feb 6 16:34:04 2018
X-Patchwork-Submitter: Peter De Schrijver
X-Patchwork-Id: 10203353
From: Peter De Schrijver
Cc: Peter De Schrijver
Subject: [PATCH v3 03/11] clk: tegra: dfll registration for multiple SoCs
Date: Tue, 6 Feb 2018 18:34:04 +0200
Message-ID: <1517934852-23255-4-git-send-email-pdeschrijver@nvidia.com>
In-Reply-To: <1517934852-23255-1-git-send-email-pdeschrijver@nvidia.com>
References: <1517934852-23255-1-git-send-email-pdeschrijver@nvidia.com>
X-Mailing-List: linux-clk@vger.kernel.org

A future patch will introduce support for the DFLL in Tegra210. This
requires handling more than one set of CVB and CPU maximum frequency
tables, so wrap the per-SoC tables in a dfll_fcpu_data structure and
select the right one through OF match data.
Signed-off-by: Peter De Schrijver
Reviewed-by: Jon Hunter
---
 drivers/clk/tegra/clk-tegra124-dfll-fcpu.c | 37 ++++++++++++++++++++++++------
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c b/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
index e2dbb79..6486ad9 100644
--- a/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
+++ b/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include <linux/of_device.h>
 #include
 #include
 #include
@@ -29,8 +30,15 @@
 #include "clk-dfll.h"
 #include "cvb.h"
 
+struct dfll_fcpu_data {
+	const unsigned long *cpu_max_freq_table;
+	unsigned int cpu_max_freq_table_size;
+	const struct cvb_table *cpu_cvb_tables;
+	unsigned int cpu_cvb_tables_size;
+};
+
 /* Maximum CPU frequency, indexed by CPU speedo id */
-static const unsigned long cpu_max_freq_table[] = {
+static const unsigned long tegra124_cpu_max_freq_table[] = {
 	[0] = 2014500000UL,
 	[1] = 2320500000UL,
 	[2] = 2116500000UL,
@@ -80,6 +88,21 @@
 	},
 };
 
+static const struct dfll_fcpu_data tegra124_dfll_fcpu_data = {
+	.cpu_max_freq_table = tegra124_cpu_max_freq_table,
+	.cpu_max_freq_table_size = ARRAY_SIZE(tegra124_cpu_max_freq_table),
+	.cpu_cvb_tables = tegra124_cpu_cvb_tables,
+	.cpu_cvb_tables_size = ARRAY_SIZE(tegra124_cpu_cvb_tables)
+};
+
+static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
+	{
+		.compatible = "nvidia,tegra124-dfll",
+		.data = &tegra124_dfll_fcpu_data,
+	},
+	{ },
+};
+
 static int get_alignment_from_regulator(struct device *dev,
 					struct rail_alignment *align)
 {
@@ -112,12 +135,17 @@ static int tegra124_dfll_fcpu_probe(struct platform_device *pdev)
 {
 	int process_id, speedo_id, speedo_value, err;
 	struct tegra_dfll_soc_data *soc;
+	const struct of_device_id *of_id;
+	const struct dfll_fcpu_data *fcpu_data;
+
+	of_id = of_match_device(tegra124_dfll_fcpu_of_match, &pdev->dev);
+	fcpu_data = of_id->data;
 
 	process_id = tegra_sku_info.cpu_process_id;
 	speedo_id = tegra_sku_info.cpu_speedo_id;
 	speedo_value = tegra_sku_info.cpu_speedo_value;
 
-	if (speedo_id >= ARRAY_SIZE(cpu_max_freq_table)) {
+	if (speedo_id >= fcpu_data->cpu_max_freq_table_size) {
 		dev_err(&pdev->dev, "unknown max CPU freq for speedo_id=%d\n",
 			speedo_id);
 		return -ENODEV;
@@ -172,11 +200,6 @@ static int tegra124_dfll_fcpu_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
-	{ .compatible = "nvidia,tegra124-dfll", },
-	{ },
-};
-
 static const struct dev_pm_ops tegra124_dfll_pm_ops = {
 	SET_RUNTIME_PM_OPS(tegra_dfll_runtime_suspend,
 			   tegra_dfll_runtime_resume, NULL)
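
The dfll_fcpu_data indirection and the .data pointer in the OF match table are
what let a later patch in the series register another SoC without touching the
probe path. As a rough illustration only (the tegra210_* names and the
frequency value below are placeholders, not values taken from this series), a
second SoC would plug in roughly like this:

	/*
	 * Illustrative sketch only: how a second SoC could hook into the
	 * dfll_fcpu_data indirection added by this patch. The tegra210_*
	 * tables and the frequency below are placeholders.
	 */
	static const unsigned long tegra210_cpu_max_freq_table[] = {
		[0] = 1912500000UL,	/* placeholder value */
	};

	static const struct cvb_table tegra210_cpu_cvb_tables[] = {
		/* per-speedo CVB entries for the new SoC would go here */
	};

	static const struct dfll_fcpu_data tegra210_dfll_fcpu_data = {
		.cpu_max_freq_table = tegra210_cpu_max_freq_table,
		.cpu_max_freq_table_size = ARRAY_SIZE(tegra210_cpu_max_freq_table),
		.cpu_cvb_tables = tegra210_cpu_cvb_tables,
		.cpu_cvb_tables_size = ARRAY_SIZE(tegra210_cpu_cvb_tables),
	};

	static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
		{
			.compatible = "nvidia,tegra124-dfll",
			.data = &tegra124_dfll_fcpu_data,
		},
		{
			/* hypothetical second entry */
			.compatible = "nvidia,tegra210-dfll",
			.data = &tegra210_dfll_fcpu_data,
		},
		{ },
	};

tegra124_dfll_fcpu_probe() would then pick up the matching tables through
of_match_device() with no further changes to the probe code.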