From patchwork Fri Mar 14 07:36:32 2014
X-Patchwork-Submitter: Hiroshi DOYU
X-Patchwork-Id: 3831161
Date: Fri, 14 Mar 2014 09:36:32 +0200
Message-ID: <20140314.093632.1854857348878394627.hdoyu@nvidia.com>
From: Hiroshi Doyu
To: John Stultz, Rebecca Schultz Zavin, Colin Cross
Cc: linaro-mm-sig@lists.linaro.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC][PATCH 1/1] ion: support multiple address space map

Hi,

Our hardware (including the GPU) can accept a dmabuf fd that was
originally allocated via ION. The GPU tries to map such a buffer into
the GPU address space via the ION dmabuf backend (.map_dma_buf). Our
Tegra IOMMU supports multiple address spaces.
Basically, each device has its own address space assigned by the IOMMU,
and the DMA API usually hides the existence of the IOMMU from driver
code. To support multiple IOMMU address spaces, I think that at least
"ion_map_dma_buf" needs to create an IOMMU mapping *per* device
(address space), and I made the following patch *experimentally*. Is
this the right way to solve the multiple-IOMMU-address-space problem?
Any comment would be really appreciated.

---8<------8<------8<------8<------8<------8<------8<------8<---
From: Hiroshi Doyu

ION doesn't support multiple address spaces. Recent IOMMUs usually
provide multiple address spaces, where a mapping depends on the
address space, and an address space is bound to the device pointer
passed in from an attachment. This information needs to be stored in
the buffer info, and if the same buffer is mapped into the same
address space again, the same mapping info needs to be passed back.

Signed-off-by: Hiroshi Doyu
---
 drivers/staging/android/ion/ion.c      | 96 +++++++++++++++++++++++++++++++++-
 drivers/staging/android/ion/ion_priv.h | 10 ++++
 2 files changed, 104 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 0836717..e260434 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -872,17 +872,109 @@ static void ion_buffer_sync_for_device(struct ion_buffer *buffer,
 static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 					enum dma_data_direction direction)
 {
+	int err, i, empty = -1;
+	struct dma_iommu_mapping *iommu_map;
 	struct dma_buf *dmabuf = attachment->dmabuf;
 	struct ion_buffer *buffer = dmabuf->priv;
+	unsigned int nents = buffer->sg_table->nents;
+	struct ion_mapping *map_ptr;
+	struct scatterlist *sg;
+
+	iommu_map = to_dma_iommu_mapping(attachment->dev);
+	if (!iommu_map) {
+		ion_buffer_sync_for_device(buffer, attachment->dev, direction);
+		return buffer->sg_table;
+	}
+
+	mutex_lock(&buffer->lock);
+	for (i = 0; i < ARRAY_SIZE(buffer->mapping); i++) {
+		map_ptr = &buffer->mapping[i];
+		if (!map_ptr->dev) {
+			empty = i;
+			continue;
+		}
+
+		if (to_dma_iommu_mapping(map_ptr->dev) == iommu_map) {
+			kref_get(&map_ptr->kref);
+			mutex_unlock(&buffer->lock);
+			return &map_ptr->sgt;
+		}
+	}

-	ion_buffer_sync_for_device(buffer, attachment->dev, direction);
-	return buffer->sg_table;
+	if (empty < 0) {
+		err = -ENOMEM;
+		goto err_no_space;
+	}
+
+	map_ptr = &buffer->mapping[empty];
+	err = sg_alloc_table(&map_ptr->sgt, nents, GFP_KERNEL);
+	if (err)
+		goto err_sg_alloc_table;
+
+	for_each_sg(buffer->sg_table->sgl, sg, nents, i)
+		memcpy(map_ptr->sgt.sgl + i, sg, sizeof(*sg));
+
+	nents = dma_map_sg(attachment->dev, map_ptr->sgt.sgl, nents, direction);
+	if (!nents) {
+		err = -EINVAL;
+		goto err_dma_map_sg;
+	}
+
+	kref_init(&map_ptr->kref);
+	map_ptr->dev = attachment->dev;
+	mutex_unlock(&buffer->lock);
+	return &map_ptr->sgt;
+
+err_dma_map_sg:
+	sg_free_table(&map_ptr->sgt);
+err_sg_alloc_table:
+err_no_space:
+	mutex_unlock(&buffer->lock);
+	return ERR_PTR(err);
+}
+
+static void __ion_unmap_dma_buf(struct kref *kref)
+{
+	struct ion_mapping *map_ptr;
+
+	map_ptr = container_of(kref, struct ion_mapping, kref);
+	dma_unmap_sg(map_ptr->dev, map_ptr->sgt.sgl, map_ptr->sgt.nents,
+		     DMA_BIDIRECTIONAL);
+	sg_free_table(&map_ptr->sgt);
+	memset(map_ptr, 0, sizeof(*map_ptr));
 }

 static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
+	int i;
+	struct dma_iommu_mapping *iommu_map;
+	struct dma_buf *dmabuf = attachment->dmabuf;
+	struct ion_buffer *buffer = dmabuf->priv;
+	struct ion_mapping *map_ptr;
+
+	iommu_map = to_dma_iommu_mapping(attachment->dev);
+	if (!iommu_map)
+		return;
+
+	mutex_lock(&buffer->lock);
+	for (i = 0; i < ARRAY_SIZE(buffer->mapping); i++) {
+		map_ptr = &buffer->mapping[i];
+		if (!map_ptr->dev)
+			continue;
+
+		if (to_dma_iommu_mapping(map_ptr->dev) == iommu_map) {
+			kref_put(&map_ptr->kref, __ion_unmap_dma_buf);
+			mutex_unlock(&buffer->lock);
+			return;
+		}
+	}
+
+	dev_warn(attachment->dev, "no mapping found (%p)\n",
+		 to_dma_iommu_mapping(attachment->dev));
+
+	mutex_unlock(&buffer->lock);
 }

 void ion_pages_sync_for_device(struct device *dev, struct page *page,
diff --git a/drivers/staging/android/ion/ion_priv.h b/drivers/staging/android/ion/ion_priv.h
index 1eba3f2..441c251 100644
--- a/drivers/staging/android/ion/ion_priv.h
+++ b/drivers/staging/android/ion/ion_priv.h
@@ -26,11 +26,19 @@
 #include
 #include
 #include
+#include

 #include "ion.h"

 struct ion_buffer *ion_handle_buffer(struct ion_handle *handle);

+struct ion_mapping {
+	struct device *dev; /* to get a map and dma_ops */
+	struct sg_table sgt;
+	struct kref kref;
+};
+#define NUM_ION_MAPPING 5 /* FIXME: dynamically allocate more than this */
+
 /**
  * struct ion_buffer - metadata for a particular buffer
  * @ref:		refernce count
@@ -84,6 +92,8 @@ struct ion_buffer {
 	int handle_count;
 	char task_comm[TASK_COMM_LEN];
 	pid_t pid;
+
+	struct ion_mapping mapping[NUM_ION_MAPPING];
 };

 void ion_buffer_destroy(struct ion_buffer *buffer);
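P.S. The core of the patch is a small find-or-get-reference cache: look
the attaching device's address space up in a fixed-size per-buffer
slot array, take a reference if a mapping already exists, otherwise
claim an empty slot; the last unmap tears the slot down. The following
is a minimal userspace C model of that logic only (the names buf_map,
buf_unmap, struct mapping are illustrative and not part of the patch;
the kref, sg_table, and locking details are elided):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define NUM_MAPPINGS 5	/* mirrors NUM_ION_MAPPING in the patch */

/* One cached per-address-space mapping slot. */
struct mapping {
	void *as;	/* address space key; NULL means the slot is free */
	int refcount;	/* stand-in for the kref in struct ion_mapping */
};

struct buffer {
	struct mapping mapping[NUM_MAPPINGS];
};

/*
 * Find an existing mapping for 'as' and take a reference, or claim the
 * first empty slot. Returns NULL when all slots are taken (the -ENOMEM
 * path in ion_map_dma_buf()).
 */
static struct mapping *buf_map(struct buffer *buf, void *as)
{
	int i, empty = -1;

	for (i = 0; i < NUM_MAPPINGS; i++) {
		struct mapping *m = &buf->mapping[i];

		if (!m->as) {
			if (empty < 0)
				empty = i;
			continue;
		}
		if (m->as == as) {
			m->refcount++;	/* kref_get() */
			return m;
		}
	}
	if (empty < 0)
		return NULL;

	buf->mapping[empty].as = as;
	buf->mapping[empty].refcount = 1;	/* kref_init() */
	return &buf->mapping[empty];
}

/* Drop one reference; the last put clears the slot for reuse. */
static void buf_unmap(struct buffer *buf, void *as)
{
	int i;

	for (i = 0; i < NUM_MAPPINGS; i++) {
		struct mapping *m = &buf->mapping[i];

		if (m->as == as) {
			if (--m->refcount == 0)	/* kref_put() release */
				memset(m, 0, sizeof(*m));
			return;
		}
	}
}
```

Mapping the same address space twice returns the same slot with its
refcount bumped, while a second address space gets its own slot; that
is exactly the behaviour the patch wants from repeated .map_dma_buf
calls by devices sharing (or not sharing) an IOMMU domain.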