From patchwork Tue Apr 10 11:33:01 2018
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 10332847
From: Peter Ujfalusi
Subject: [PATCH v3 3/3] drm/omap: partial workaround for DRA7xx DMM errata i878
Date: Tue, 10 Apr 2018 14:33:01 +0300
Message-ID: <20180410113301.18984-4-peter.ujfalusi@ti.com>
In-Reply-To: <20180410113301.18984-1-peter.ujfalusi@ti.com>
References: <20180410113301.18984-1-peter.ujfalusi@ti.com>
Cc: devicetree@vger.kernel.org, jsarha@ti.com, dri-devel@lists.freedesktop.org

From: Tomi Valkeinen

Errata i878 says that the MPU should not be used to access RAM and the DMM
at the same time. As it is not possible to prevent the MPU from accessing
RAM, we need to access the DMM via a proxy.

This patch changes:

- The DMM driver to access DMM registers via sDMA. Instead of doing a
  normal readl/writel call to read/write a register, we use sDMA to copy
  4 bytes from/to the DMM registers.

- When the i878 workaround is needed, we use a threaded irq. It is not
  good practice to busy-loop in the interrupt handler while waiting for
  the DMA register access to complete. The DMA transfer should not take
  long, but if something prevents it from completing we might end up
  waiting for up to 5 seconds.

This patch provides only a partial workaround for i878, as not only DMM
register reads/writes are affected, but also accesses to DMM-mapped
buffers (usually framebuffers).

Signed-off-by: Tomi Valkeinen
Signed-off-by: Peter Ujfalusi
---
 drivers/gpu/drm/omapdrm/omap_dmm_priv.h  |   8 ++
 drivers/gpu/drm/omapdrm/omap_dmm_tiler.c | 161 ++++++++++++++++++++++-
 2 files changed, 165 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_priv.h b/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
index c2785cc98dc9..a0164652db1e 100644
--- a/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
+++ b/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
@@ -155,10 +155,12 @@ struct refill_engine {
 
 struct dmm_platform_data {
 	u32 cpu_cache_flags;
+	bool errata_i878_wa;
 };
 
 struct dmm {
 	struct device *dev;
+	dma_addr_t phys_base;
 	void __iomem *base;
 	int irq;
 
@@ -189,6 +191,12 @@ struct dmm {
 	struct list_head alloc_head;
 
 	const struct dmm_platform_data *plat_data;
+
+	bool dmm_workaround;
+	struct mutex wa_lock;
+	u32 *wa_dma_data;
+	dma_addr_t wa_dma_handle;
+	struct dma_chan *wa_dma_chan;
 };
 
 #endif
diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
index 8671d06c0eb4..fad55f2faa47 100644
--- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
@@ -18,12 +18,14 @@
 #include <linux/completion.h>
 #include <linux/delay.h>
 #include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
 #include <linux/errno.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/list.h>
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/mutex.h>
 #include <linux/platform_device.h> /* platform_device() */
 #include <linux/sched.h>
 #include <linux/seq_file.h>
@@ -70,6 +72,7 @@ static const struct {
 	[TILFMT_PAGE] = GEOM(SLOT_WIDTH_BITS, SLOT_HEIGHT_BITS, 1),
 };
 
+#define DMM_REG_SIZE	4
 
 /* lookup table for registers w/ per-engine instances */
 static const u32 reg[][4] = {
@@ -79,14 +82,135 @@ static const u32 reg[][4] = {
 			DMM_PAT_DESCR__2, DMM_PAT_DESCR__3},
 };
 
+static int dmm_dma_copy(struct dmm *dmm, dma_addr_t src, dma_addr_t dst)
+{
+	struct dma_async_tx_descriptor *tx;
+	enum dma_status status;
+	dma_cookie_t cookie;
+
+	tx = dmaengine_prep_dma_memcpy(dmm->wa_dma_chan, dst, src, 4, 0);
+	if (!tx) {
+		dev_err(dmm->dev, "Failed to prepare DMA memcpy\n");
+		return -EIO;
+	}
+
+	cookie = tx->tx_submit(tx);
+	if (dma_submit_error(cookie)) {
+		dev_err(dmm->dev, "Failed to do DMA tx_submit\n");
+		return -EIO;
+	}
+
+	status = dma_sync_wait(dmm->wa_dma_chan, cookie);
+	if (status != DMA_COMPLETE)
+		dev_err(dmm->dev, "i878 wa DMA copy failure\n");
+
+	dmaengine_terminate_all(dmm->wa_dma_chan);
+	return 0;
+}
+
+static u32 dmm_read_wa(struct dmm *dmm, u32 reg)
+{
+	dma_addr_t src, dst;
+	int r;
+
+	src = dmm->phys_base + reg;
+	dst = dmm->wa_dma_handle;
+
+	r = dmm_dma_copy(dmm, src, dst);
+	if (r) {
+		dev_err(dmm->dev, "sDMA read transfer timeout\n");
+		return readl(dmm->base + reg);
+	}
+
+	/*
+	 * As per the i878 workaround, the DMA is used to access the DMM
+	 * registers. Make sure that the readl is not moved by the compiler
+	 * or the CPU earlier than the DMA finished writing the value to
+	 * memory.
+	 */
+	rmb();
+	return readl(dmm->wa_dma_data);
+}
+
+static void dmm_write_wa(struct dmm *dmm, u32 val, u32 reg)
+{
+	dma_addr_t src, dst;
+	int r;
+
+	writel(val, dmm->wa_dma_data);
+	/*
+	 * As per the i878 workaround, the DMA is used to access the DMM
+	 * registers. Make sure that the writel is not moved by the compiler
+	 * or the CPU, so the data will be in place before we start the DMA
+	 * to do the actual register write.
+	 */
+	wmb();
+
+	src = dmm->wa_dma_handle;
+	dst = dmm->phys_base + reg;
+
+	r = dmm_dma_copy(dmm, src, dst);
+	if (r) {
+		dev_err(dmm->dev, "sDMA write transfer timeout\n");
+		writel(val, dmm->base + reg);
+	}
+}
+
 static u32 dmm_read(struct dmm *dmm, u32 reg)
 {
-	return readl(dmm->base + reg);
+	if (dmm->dmm_workaround) {
+		u32 v;
+
+		mutex_lock(&dmm->wa_lock);
+		v = dmm_read_wa(dmm, reg);
+		mutex_unlock(&dmm->wa_lock);
+
+		return v;
+	} else {
+		return readl(dmm->base + reg);
+	}
 }
 
 static void dmm_write(struct dmm *dmm, u32 val, u32 reg)
 {
-	writel(val, dmm->base + reg);
+	if (dmm->dmm_workaround) {
+		mutex_lock(&dmm->wa_lock);
+		dmm_write_wa(dmm, val, reg);
+		mutex_unlock(&dmm->wa_lock);
+	} else {
+		writel(val, dmm->base + reg);
+	}
+}
+
+static int dmm_workaround_init(struct dmm *dmm)
+{
+	dma_cap_mask_t mask;
+
+	mutex_init(&dmm->wa_lock);
+
+	dmm->wa_dma_data = dma_alloc_coherent(dmm->dev, DMM_REG_SIZE,
+					      &dmm->wa_dma_handle, GFP_KERNEL);
+	if (!dmm->wa_dma_data)
+		return -ENOMEM;
+
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_MEMCPY, mask);
+
+	dmm->wa_dma_chan = dma_request_channel(mask, NULL, NULL);
+	if (!dmm->wa_dma_chan) {
+		dma_free_coherent(dmm->dev, DMM_REG_SIZE, dmm->wa_dma_data,
+				  dmm->wa_dma_handle);
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+static void dmm_workaround_uninit(struct dmm *dmm)
+{
+	dma_release_channel(dmm->wa_dma_chan);
+
+	dma_free_coherent(dmm->dev, DMM_REG_SIZE, dmm->wa_dma_data,
+			  dmm->wa_dma_handle);
 }
 
 /* simple allocator to grab next 16 byte aligned memory from txn */
@@ -634,6 +758,9 @@ static int omap_dmm_remove(struct platform_device *dev)
 
 		free_irq(omap_dmm->irq, omap_dmm);
 
+		if (omap_dmm->dmm_workaround)
+			dmm_workaround_uninit(omap_dmm);
+
 		iounmap(omap_dmm->base);
 		kfree(omap_dmm);
 		omap_dmm = NULL;
@@ -679,6 +806,7 @@ static int omap_dmm_probe(struct platform_device *dev)
 		goto fail;
 	}
 
+	omap_dmm->phys_base = mem->start;
 	omap_dmm->base = ioremap(mem->start, SZ_2K);
 
 	if (!omap_dmm->base) {
@@ -694,6 +822,17 @@ static int omap_dmm_probe(struct platform_device *dev)
 
 	omap_dmm->dev = &dev->dev;
 
+	if (omap_dmm->plat_data->errata_i878_wa) {
+		if (!dmm_workaround_init(omap_dmm)) {
+			omap_dmm->dmm_workaround = true;
+			dev_info(&dev->dev,
+				 "workaround for errata i878 in use\n");
+		} else {
+			dev_warn(&dev->dev,
+				 "failed to initialize work-around for i878\n");
+		}
+	}
+
 	hwinfo = dmm_read(omap_dmm, DMM_PAT_HWINFO);
 	omap_dmm->num_engines = (hwinfo >> 24) & 0x1F;
 	omap_dmm->num_lut = (hwinfo >> 16) & 0x1F;
@@ -720,8 +859,13 @@ static int omap_dmm_probe(struct platform_device *dev)
 	dmm_write(omap_dmm, 0x88888888, DMM_TILER_OR__0);
 	dmm_write(omap_dmm, 0x88888888, DMM_TILER_OR__1);
 
-	ret = request_irq(omap_dmm->irq, omap_dmm_irq_handler, IRQF_SHARED,
-			  "omap_dmm_irq_handler", omap_dmm);
+	if (omap_dmm->dmm_workaround)
+		ret = request_threaded_irq(omap_dmm->irq, NULL,
+					   omap_dmm_irq_handler, IRQF_ONESHOT,
+					   "omap_dmm_irq_handler", omap_dmm);
+	else
+		ret = request_irq(omap_dmm->irq, omap_dmm_irq_handler,
+				  IRQF_SHARED, "omap_dmm_irq_handler", omap_dmm);
 
 	if (ret) {
 		dev_err(&dev->dev, "couldn't register IRQ %d, error %d\n",
@@ -1057,6 +1201,11 @@ static const struct dmm_platform_data dmm_omap5_platform_data = {
 	.cpu_cache_flags = OMAP_BO_UNCACHED,
 };
 
+static const struct dmm_platform_data dmm_dra7_platform_data = {
+	.cpu_cache_flags = OMAP_BO_UNCACHED,
+	.errata_i878_wa = true,
+};
+
 static const struct of_device_id dmm_of_match[] = {
 	{
 		.compatible = "ti,omap4-dmm",
@@ -1066,6 +1215,10 @@ static const struct of_device_id dmm_of_match[] = {
 		.compatible = "ti,omap5-dmm",
 		.data = &dmm_omap5_platform_data,
 	},
+	{
+		.compatible = "ti,dra7-dmm",
+		.data = &dmm_dra7_platform_data,
+	},
 	{},
 };
 #endif
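
For readers unfamiliar with the dmaengine memcpy pattern the workaround is
built on, the following condensed, standalone sketch shows the same
request-channel / prep-memcpy / submit / sync-wait sequence for a single
32-bit register read. It is illustrative only: example_read_reg() and its
parameters are hypothetical names, not part of this patch, and error
reporting is trimmed to the essentials.

/*
 * Illustrative sketch only, not part of the patch: the sequence used by
 * dmm_dma_copy() and dmm_read_wa() above, condensed into one function.
 * example_read_reg() and its parameters are hypothetical names.
 */
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

static u32 example_read_reg(struct device *dev, dma_addr_t reg_phys)
{
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cap_mask_t mask;
	dma_addr_t buf_dma;
	dma_cookie_t cookie;
	u32 *buf, val = 0;

	/* Ask the dmaengine core for any memory-to-memory capable channel. */
	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return 0;

	/* Coherent bounce buffer: the DMA writes here, the CPU reads back. */
	buf = dma_alloc_coherent(dev, 4, &buf_dma, GFP_KERNEL);
	if (!buf)
		goto release;

	/* Copy 4 bytes from the register's physical address to the buffer. */
	tx = dmaengine_prep_dma_memcpy(chan, buf_dma, reg_phys, 4, 0);
	if (!tx)
		goto free;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		goto free;

	/* Busy-waits until the cookie completes; process context only. */
	if (dma_sync_wait(chan, cookie) == DMA_COMPLETE) {
		rmb();	/* don't read the buffer before the DMA has written it */
		val = *buf;
	}

free:
	dma_free_coherent(dev, 4, buf, buf_dma);
release:
	dma_release_channel(chan);
	return val;
}

The patch itself amortizes the channel request and buffer allocation across
all register accesses (dmm_workaround_init() does them once at probe time),
which is why only the prep/submit/wait steps appear in dmm_dma_copy().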