From patchwork Mon Apr 1 08:56:45 2019
X-Patchwork-Submitter: Maxime Ripard
X-Patchwork-Id: 10879443
From: Maxime Ripard
To: Mark Rutland, Rob Herring, Frank Rowand, Chen-Yu Tsai, Maxime Ripard
Subject: [PATCH v5 5/7] drm/sun4i: Rely on dma interconnect for our RAM offset
Date: Mon, 1 Apr 2019 10:56:45 +0200
Message-Id: <5df781318e7e05f780a11ed243dcf2b9fe8a08cb.1554108995.git-series.maxime.ripard@bootlin.com>
X-Mailer: git-send-email 2.20.1
Cc: devicetree@vger.kernel.org, Thomas Petazzoni,
    Arnd Bergmann, Daniel Vetter, dri-devel@lists.freedesktop.org,
    Georgi Djakov, Paul Kocialkowski, Yong Deng, Robin Murphy,
    Dave Martin, linux-arm-kernel@lists.infradead.org

Now that we can express our DMA topology in the device tree, rely on that
property instead of hardcoding an offset subtracted from the dma_addr_t,
which wasn't really great.

We still need some code to deal with old DTs that lack that property, but
we move the offset to the DRM device's dma_pfn_offset so that we can rely
on just the dma_addr_t associated with the GEM object.

Acked-by: Daniel Vetter
Acked-by: Robin Murphy
Signed-off-by: Maxime Ripard
---
 drivers/gpu/drm/sun4i/sun4i_backend.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
index ee59da4a0172..4e5922c89d7b 100644
--- a/drivers/gpu/drm/sun4i/sun4i_backend.c
+++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
@@ -361,13 +361,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
 	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
 	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
 
-	/*
-	 * backend DMA accesses DRAM directly, bypassing the system
-	 * bus. As such, the address range is different and the buffer
-	 * address needs to be corrected.
-	 */
-	paddr -= PHYS_OFFSET;
-
 	if (fb->format->is_yuv)
 		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
 
@@ -803,6 +796,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
 	dev_set_drvdata(dev, backend);
 	spin_lock_init(&backend->frontend_lock);
 
+	if (of_find_property(dev->of_node, "interconnects", NULL)) {
+		/*
+		 * This assumes we have the same DMA constraints for all the
+		 * devices in our pipeline (all the backends, but also the
+		 * frontends). This sounds bad, but it has always been the case
+		 * for us, and DRM doesn't do per-device allocation either, so
+		 * we would need to fix DRM first...
+		 */
+		ret = of_dma_configure(drm->dev, dev->of_node, true);
+		if (ret)
+			return ret;
+	} else {
+		/*
+		 * If we don't have the interconnect property, most likely
+		 * because of an old DT, we need to set the DMA offset by hand
+		 * on our device since the RAM mapping is at 0 for the DMA bus,
+		 * unlike the CPU.
+		 */
+		drm->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
+	}
+
 	backend->engine.node = dev->of_node;
 	backend->engine.ops = &sun4i_backend_engine_ops;
 	backend->engine.id = sun4i_backend_of_get_id(dev->of_node);
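For readers who want to check the arithmetic: the dma_pfn_offset fallback in
the second hunk is equivalent to the old explicit "paddr -= PHYS_OFFSET". The
stand-alone sketch below only illustrates that equivalence; the DRAM base
(0x40000000) and the buffer address are assumptions chosen to resemble a
typical sunxi memory map, not values taken from this patch or a kernel header.

/*
 * Sketch only, not kernel code: shows why setting dma_pfn_offset to
 * PHYS_PFN_OFFSET gives the same bus address as the old manual subtraction.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PHYS_OFFSET	0x40000000ULL		/* assumed CPU view of the DRAM base */
#define PHYS_PFN_OFFSET	(PHYS_OFFSET >> PAGE_SHIFT)

int main(void)
{
	uint64_t paddr = 0x60000000ULL;		/* hypothetical GEM buffer, CPU physical */

	/* Old behaviour: the driver corrected the address by hand. */
	uint64_t old_dma = paddr - PHYS_OFFSET;

	/*
	 * New behaviour: the same shift is applied generically, either via
	 * the DT description consumed by of_dma_configure() or via the
	 * dma_pfn_offset fallback, so the dma_addr_t handed to the driver
	 * is already relative to the DMA bus.
	 */
	uint64_t dma_pfn_offset = PHYS_PFN_OFFSET;
	uint64_t new_dma = ((paddr >> PAGE_SHIFT) - dma_pfn_offset) << PAGE_SHIFT;

	printf("old: 0x%llx, new: 0x%llx\n",
	       (unsigned long long)old_dma,
	       (unsigned long long)new_dma);
	return 0;
}

Both computations yield the same bus address (0x20000000 with these numbers);
the patch only moves where that translation happens, from the driver into the
generic DMA code driven by the device tree.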