From patchwork Mon Jul 13 12:39:40 2015
X-Patchwork-Submitter: Jon Hunter
X-Patchwork-Id: 6778951
From: Jon Hunter
To: Stephen Warren, Thierry Reding, Alexandre Courbot, Philipp Zabel,
 Peter De Schrijver, Prashant Gaikwad, Terje Bergström, Hans de Goede,
 Tejun Heo
Cc: devicetree@vger.kernel.org, Ulf Hansson, Vince Hsu, Kevin Hilman,
 linux-pm@vger.kernel.org, "Rafael J. Wysocki", Jon Hunter,
 linux-tegra@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V3 02/19] memory: tegra: Add MC flush support
Date: Mon, 13 Jul 2015 13:39:40 +0100
Message-ID: <1436791197-32358-3-git-send-email-jonathanh@nvidia.com>
In-Reply-To: <1436791197-32358-1-git-send-email-jonathanh@nvidia.com>
References: <1436791197-32358-1-git-send-email-jonathanh@nvidia.com>
X-Mailer: git-send-email 2.1.4

The Tegra memory controller implements a flush feature to flush pending
accesses and prevent further accesses from occurring.
This feature is used when powering down IP blocks, to ensure that the IP
block is in a good state. The flushes are organised by software groups,
and IP blocks are assigned in hardware to the different software groups.
Add helper functions for requesting a handle to an MC flush for a given
software group and for enabling/disabling the MC flush itself.

This is based upon a change by Vince Hsu.

Signed-off-by: Jon Hunter
---
 drivers/memory/tegra/mc.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++
 drivers/memory/tegra/mc.h |   2 +
 include/soc/tegra/mc.h    |  34 ++++++++++++++
 3 files changed, 146 insertions(+)

diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
index c71ede67e6c8..fb8da3d4caf4 100644
--- a/drivers/memory/tegra/mc.c
+++ b/drivers/memory/tegra/mc.c
@@ -7,6 +7,7 @@
  */
 
 #include <linux/clk.h>
+#include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -71,6 +72,107 @@ static const struct of_device_id tegra_mc_of_match[] = {
 };
 MODULE_DEVICE_TABLE(of, tegra_mc_of_match);
 
+const struct tegra_mc_flush *tegra_mc_flush_get(struct tegra_mc *mc,
+						unsigned int swgroup)
+{
+	const struct tegra_mc_flush *flush = NULL;
+	int i;
+
+	mutex_lock(&mc->lock);
+
+	for (i = 0; i < mc->soc->num_flushes; i++) {
+		if (mc->soc->flushes[i].swgroup == swgroup) {
+			if (!mc->flush_reserved[i]) {
+				mc->flush_reserved[i] = true;
+				flush = &mc->soc->flushes[i];
+			}
+			break;
+		}
+	}
+
+	mutex_unlock(&mc->lock);
+
+	return flush;
+}
+EXPORT_SYMBOL(tegra_mc_flush_get);
+
+static int tegra_mc_flush_done(struct tegra_mc *mc,
+			       const struct tegra_mc_flush *flush)
+{
+	unsigned long timeout = jiffies + msecs_to_jiffies(100);
+	int i;
+	u32 val;
+
+	while (time_before(jiffies, timeout)) {
+		val = mc_readl(mc, flush->status);
+
+		/*
+		 * If the flush bit is still set, the flush is not
+		 * done yet, so wait and then retry.
+		 */
+		if (val & BIT(flush->bit))
+			goto retry;
+
+		/*
+		 * Depending on the Tegra SoC, it may be necessary to read
+		 * the status register multiple times to ensure the value
+		 * read is correct. Some Tegra devices have a HW issue where
+		 * reading the status register shortly after writing the
+		 * control register (on the order of 5 cycles) may return
+		 * an incorrect value.
+		 */
+		for (i = 0; i < mc->soc->metastable_flush_reads; i++) {
+			if (mc_readl(mc, flush->status) != val)
+				goto retry;
+		}
+
+		/*
+		 * The flush is complete, so return success.
+		 */
+		return 0;
+retry:
+		udelay(10);
+	}
+
+	return -ETIMEDOUT;
+}
+
+int tegra_mc_flush(struct tegra_mc *mc, const struct tegra_mc_flush *flush,
+		   bool enable)
+{
+	int ret = 0;
+	u32 val;
+
+	if (!mc || !flush)
+		return -EINVAL;
+
+	mutex_lock(&mc->lock);
+
+	val = mc_readl(mc, flush->ctrl);
+
+	if (enable)
+		val |= BIT(flush->bit);
+	else
+		val &= ~BIT(flush->bit);
+
+	mc_writel(mc, val, flush->ctrl);
+	mc_readl(mc, flush->ctrl);
+
+	/*
+	 * If activating the flush, poll the
+	 * status register until the flush is done.
+	 */
+	if (enable)
+		ret = tegra_mc_flush_done(mc, flush);
+
+	mutex_unlock(&mc->lock);
+
+	dev_dbg(mc->dev, "%s bit %d\n", __func__, flush->bit);
+
+	return ret;
+}
+EXPORT_SYMBOL(tegra_mc_flush);
+
 static int tegra_mc_setup_latency_allowance(struct tegra_mc *mc)
 {
 	unsigned long long tick;
@@ -359,6 +461,12 @@ static int tegra_mc_probe(struct platform_device *pdev)
 	mc->soc = match->data;
 	mc->dev = &pdev->dev;
 
+	mc->flush_reserved = devm_kcalloc(&pdev->dev, mc->soc->num_flushes,
+					  sizeof(*mc->flush_reserved),
+					  GFP_KERNEL);
+	if (!mc->flush_reserved)
+		return -ENOMEM;
+
 	/* length of MC tick in nanoseconds */
 	mc->tick = 30;
 
@@ -410,6 +518,8 @@ static int tegra_mc_probe(struct platform_device *pdev)
 		return err;
 	}
 
+	mutex_init(&mc->lock);
+
 	value = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
 		MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
 		MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM;
diff --git a/drivers/memory/tegra/mc.h b/drivers/memory/tegra/mc.h
index b7361b0a6696..0f59d49b735b 100644
--- a/drivers/memory/tegra/mc.h
+++ b/drivers/memory/tegra/mc.h
@@ -14,6 +14,8 @@
 
 #include <soc/tegra/mc.h>
 
+#define MC_FLUSH_METASTABLE_READS	5
+
 static inline u32 mc_readl(struct tegra_mc *mc, unsigned long offset)
 {
 	return readl(mc->regs + offset);
diff --git a/include/soc/tegra/mc.h b/include/soc/tegra/mc.h
index 1ab2813273cd..b634c6df79eb 100644
--- a/include/soc/tegra/mc.h
+++ b/include/soc/tegra/mc.h
@@ -45,6 +45,13 @@ struct tegra_mc_client {
 	struct tegra_mc_la la;
 };
 
+struct tegra_mc_flush {
+	unsigned int swgroup;
+	unsigned int ctrl;
+	unsigned int status;
+	unsigned int bit;
+};
+
 struct tegra_smmu_swgroup {
 	const char *name;
 	unsigned int swgroup;
@@ -96,6 +103,10 @@ struct tegra_mc_soc {
 	const struct tegra_mc_client *clients;
 	unsigned int num_clients;
 
+	const struct tegra_mc_flush *flushes;
+	unsigned int num_flushes;
+	unsigned int metastable_flush_reads;
+
 	const unsigned long *emem_regs;
 	unsigned int num_emem_regs;
 
@@ -117,9 +128,32 @@ struct tegra_mc {
 
 	struct tegra_mc_timing *timings;
 	unsigned int num_timings;
+
+	bool *flush_reserved;
+
+	struct mutex lock;
 };
 
 void tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
 unsigned int tegra_mc_get_emem_device_count(struct tegra_mc *mc);
 
+#ifdef CONFIG_TEGRA_MC
+const struct tegra_mc_flush *tegra_mc_flush_get(struct tegra_mc *mc,
+						unsigned int swgroup);
+int tegra_mc_flush(struct tegra_mc *mc, const struct tegra_mc_flush *s,
+		   bool enable);
+#else
+static inline const struct tegra_mc_flush *
+tegra_mc_flush_get(struct tegra_mc *mc, unsigned int swgroup)
+{
+	return NULL;
+}
+
+static inline int tegra_mc_flush(struct tegra_mc *mc,
+				 const struct tegra_mc_flush *s, bool enable)
+{
+	return -ENOTSUPP;
+}
+#endif
+
 #endif /* __SOC_TEGRA_MC_H__ */
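
For reviewers, a minimal usage sketch (not part of this patch) of how a
client, for example a power-gate sequence, might drive the new helpers.
The example_powergate() function, its swgroup argument and the chosen
error codes are illustrative assumptions; only tegra_mc_flush_get() and
tegra_mc_flush() are defined by this patch:

  /*
   * Illustrative sketch only: reserve the flush client for a software
   * group, block/drain memory accesses around a power-gate operation,
   * then re-enable accesses. Names outside this patch are hypothetical.
   */
  #include <linux/errno.h>
  #include <soc/tegra/mc.h>

  static int example_powergate(struct tegra_mc *mc, unsigned int swgroup)
  {
          const struct tegra_mc_flush *flush;
          int err;

          /* Request the (exclusive) flush handle for this software group. */
          flush = tegra_mc_flush_get(mc, swgroup);
          if (!flush)
                  return -EBUSY;

          /* Enable the flush: block new accesses and drain pending ones. */
          err = tegra_mc_flush(mc, flush, true);
          if (err < 0)
                  return err;

          /* ... power down the IP block(s) in this software group ... */

          /* Disable the flush so memory accesses can resume. */
          return tegra_mc_flush(mc, flush, false);
  }

Note that the handle remains reserved afterwards; this patch adds no
corresponding release helper, so a client is expected to request the
handle once and reuse it.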