From patchwork Fri Sep 6 14:17:14 2019
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11135347
From: Peter Ujfalusi
Subject: [RFC 0/3] dmaengine: Support for DMA domain controllers
Date: Fri, 6 Sep 2019 17:17:14 +0300
Message-ID: <20190906141717.23859-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

Hi,

More and more SoCs have more than one DMA controller integrated. When a device needs a non-slave DMA channel for operation (block copy from/to memory-mapped regions, for example), the channel it requests is currently taken from the first DMA controller that was registered, which may not be optimal for the device.

For example, on AM654 we have two DMAs: main_udmap and mcu_udmap. DDR-to-DDR memcpy is twice as fast on main_udmap as on mcu_udmap, while devices in the MCU domain (OSPI, for example) are more than twice as fast with mcu_udmap as with main_udmap. Because of probe order (mcu_udmap probes first), modules would use mcu_udmap instead of the better-suited main_udmap. Currently the only solution is to make a choice and disable the MEM_TO_MEM functionality on one of them, which is not a great solution.

With the introduction of DMA domain controllers we can use the best-suited DMA controller for the job anywhere on the SoC without degrading performance. If the dma-domain-controller property is not present in DT, or the system is booted without DT, non-slave channel requests work as they do today.
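As a sketch of how a client node might point at its preferred DMA domain controller (the property name comes from this cover letter; the node names and the placement of the property are illustrative assumptions, not the final binding):

```dts
/* Hypothetical AM654 fragment: the OSPI controller prefers non-slave
 * channels from the MCU-domain DMA. Node names and property placement
 * are examples only; see the dma-domain.yaml binding in patch 1. */
&ospi0 {
	dma-domain-controller = <&mcu_udmap>;
};
```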
The last patch introduces a new dma_domain_request_chan_by_mask() function, and I have added a define for dma_request_chan_by_mask() to avoid breaking its users. Looking at the kernel, we have only a small number of users:

 drivers/gpu/drm/vc4/vc4_dsi.c
 drivers/media/platform/omap/omap_vout_vrfb.c
 drivers/media/platform/omap3isp/isphist.c
 drivers/mtd/spi-nor/cadence-quadspi.c
 drivers/spi/spi-ti-qspi.c

If it is acceptable, we could instead modify the parameters of dma_request_chan_by_mask() to include the device pointer and at the same time change all of the clients, passing NULL or, in the case of the last two, their dev.

Regards,
Peter

---
Peter Ujfalusi (3):
  dt-bindings: dma: Add documentation for DMA domains
  dmaengine: of_dma: Function to look up the DMA domain of a client
  dmaengine: Support for requesting channels preferring DMA domain
    controller

 .../devicetree/bindings/dma/dma-domain.yaml | 59 +++++++++++++++++++
 drivers/dma/dmaengine.c                     | 17 ++++--
 drivers/dma/of-dma.c                        | 42 +++++++++++++
 include/linux/dmaengine.h                   |  9 ++-
 include/linux/of_dma.h                      |  7 +++
 5 files changed, 126 insertions(+), 8 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/dma/dma-domain.yaml