From patchwork Thu Mar 2 21:44:42 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9601621
From: Laura Abbott
To: Sumit Semwal, Riley Andrews, arve@android.com
Subject: [RFC PATCH 10/12] staging: android: ion: Use CMA APIs directly
Date: Thu, 2 Mar 2017 13:44:42 -0800
Message-Id: <1488491084-17252-11-git-send-email-labbott@redhat.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1488491084-17252-1-git-send-email-labbott@redhat.com>
References: <1488491084-17252-1-git-send-email-labbott@redhat.com>
Cc: devel@driverdev.osuosl.org, romlem@google.com, Greg Kroah-Hartman,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, linux-mm@kvack.org, Mark Brown,
    Daniel Vetter, linux-arm-kernel@lists.infradead.org,
    linux-media@vger.kernel.org
When CMA was first introduced, its primary use was DMA allocation, and the
only way to get CMA memory was to call dma_alloc_coherent. This put Ion in
an awkward position, since there was no device structure readily available,
and setting one up messed up the coherency model. These days, CMA memory
can be allocated directly through the CMA APIs. Switch to that model to
avoid needing a dummy device. This also avoids awkward caching questions.

Signed-off-by: Laura Abbott
---
 drivers/staging/android/ion/ion_cma_heap.c | 97 ++++++++----------------------
 1 file changed, 26 insertions(+), 71 deletions(-)

diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index d562fd7..6838825 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -19,24 +19,19 @@
 #include <linux/slab.h>
 #include <linux/errno.h>
 #include <linux/err.h>
-#include <linux/dma-mapping.h>
+#include <linux/cma.h>
+#include <linux/scatterlist.h>
 
 #include "ion.h"
 #include "ion_priv.h"
 
 struct ion_cma_heap {
 	struct ion_heap heap;
-	struct device *dev;
+	struct cma *cma;
 };
 
 #define to_cma_heap(x) container_of(x, struct ion_cma_heap, heap)
 
-struct ion_cma_buffer_info {
-	void *cpu_addr;
-	dma_addr_t handle;
-	struct sg_table *table;
-};
-
 /* ION CMA heap operations functions */
 static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
@@ -44,93 +39,53 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 			    unsigned long flags)
 {
 	struct ion_cma_heap *cma_heap = to_cma_heap(heap);
-	struct device *dev = cma_heap->dev;
-	struct ion_cma_buffer_info *info;
-
-	dev_dbg(dev, "Request buffer allocation len %ld\n", len);
-
-	if (buffer->flags & ION_FLAG_CACHED)
-		return -EINVAL;
+	struct sg_table *table;
+	struct page *pages;
+	int ret;
 
-	info = kzalloc(sizeof(*info), GFP_KERNEL);
-	if (!info)
+	pages = cma_alloc(cma_heap->cma, len, 0);
+	if (!pages)
 		return -ENOMEM;
 
-	info->cpu_addr = dma_alloc_coherent(dev, len, &(info->handle),
-					    GFP_HIGHUSER | __GFP_ZERO);
-
-	if (!info->cpu_addr) {
-		dev_err(dev, "Fail to allocate buffer\n");
+	table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!table)
 		goto err;
-	}
 
-	info->table = kmalloc(sizeof(*info->table), GFP_KERNEL);
-	if (!info->table)
+	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+	if (ret)
 		goto free_mem;
 
-	if (dma_get_sgtable(dev, info->table, info->cpu_addr, info->handle,
-			    len))
-		goto free_table;
-	/* keep this for memory release */
-	buffer->priv_virt = info;
-	buffer->sg_table = info->table;
-	dev_dbg(dev, "Allocate buffer %p\n", buffer);
+	sg_set_page(table->sgl, pages, len, 0);
+
+	buffer->priv_virt = pages;
+	buffer->sg_table = table;
 	return 0;
 
-free_table:
-	kfree(info->table);
 free_mem:
-	dma_free_coherent(dev, len, info->cpu_addr, info->handle);
+	kfree(table);
 err:
-	kfree(info);
+	cma_release(cma_heap->cma, pages, buffer->size);
 	return -ENOMEM;
 }
 
 static void ion_cma_free(struct ion_buffer *buffer)
 {
 	struct ion_cma_heap *cma_heap = to_cma_heap(buffer->heap);
-	struct device *dev = cma_heap->dev;
-	struct ion_cma_buffer_info *info = buffer->priv_virt;
+	struct page *pages = buffer->priv_virt;
 
-	dev_dbg(dev, "Release buffer %p\n", buffer);
 	/* release memory */
-	dma_free_coherent(dev, buffer->size, info->cpu_addr,
-			  info->handle);
+	cma_release(cma_heap->cma, pages, buffer->size);
 	/* release sg table */
-	sg_free_table(info->table);
-	kfree(info->table);
-	kfree(info);
-}
-
-static int ion_cma_mmap(struct ion_heap *mapper, struct ion_buffer *buffer,
-			struct vm_area_struct *vma)
-{
-	struct ion_cma_heap *cma_heap = to_cma_heap(buffer->heap);
-	struct device *dev = cma_heap->dev;
-	struct ion_cma_buffer_info *info = buffer->priv_virt;
-
-	return dma_mmap_coherent(dev, vma, info->cpu_addr, info->handle,
-				 buffer->size);
-}
-
-static void *ion_cma_map_kernel(struct ion_heap *heap,
-				struct ion_buffer *buffer)
-{
-	struct ion_cma_buffer_info *info = buffer->priv_virt;
-	/* kernel memory mapping has been done at allocation time */
-	return info->cpu_addr;
-}
-
-static void ion_cma_unmap_kernel(struct ion_heap *heap,
-				 struct ion_buffer *buffer)
-{
+	sg_free_table(buffer->sg_table);
+	kfree(buffer->sg_table);
 }
 
 static struct ion_heap_ops ion_cma_ops = {
 	.allocate = ion_cma_allocate,
 	.free = ion_cma_free,
-	.map_user = ion_cma_mmap,
-	.map_kernel = ion_cma_map_kernel,
-	.unmap_kernel = ion_cma_unmap_kernel,
+	.map_user = ion_heap_map_user,
+	.map_kernel = ion_heap_map_kernel,
+	.unmap_kernel = ion_heap_unmap_kernel,
 };
 
 struct ion_heap *ion_cma_heap_create(struct ion_platform_heap *data)
@@ -147,7 +102,7 @@ struct ion_heap *ion_cma_heap_create(struct ion_platform_heap *data)
 	 * get device from private heaps data, later it will be
 	 * used to make the link with reserved CMA memory
 	 */
-	cma_heap->dev = data->priv;
+	cma_heap->cma = data->priv;
 	cma_heap->heap.type = ION_HEAP_TYPE_DMA;
 	return &cma_heap->heap;
 }
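
For readers who have not used the raw CMA interface before, the pattern the
patch moves to is small enough to sketch on its own: allocate a contiguous
run of pages from a struct cma area, describe it with a single-entry
sg_table, and release both on the way out. This is only an illustrative
sketch, not part of the patch: the helper names below are made up, the
cma_alloc() signature shown is the three-argument form of this era (later
kernels added further arguments), and cma_alloc()/cma_release() count in
pages rather than bytes.

#include <linux/cma.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Hypothetical helper: allocate nr_pages from @cma and wrap them in an
 * sg_table with a single entry, since the region is physically contiguous.
 */
static struct sg_table *example_cma_sg_alloc(struct cma *cma,
					     unsigned long nr_pages,
					     struct page **out_pages)
{
	struct sg_table *table;
	struct page *pages;

	/* One contiguous chunk; 0 means no extra alignment order. */
	pages = cma_alloc(cma, nr_pages, 0);
	if (!pages)
		return NULL;

	table = kmalloc(sizeof(*table), GFP_KERNEL);
	if (!table)
		goto release;

	if (sg_alloc_table(table, 1, GFP_KERNEL))
		goto free_table;

	/* The whole allocation fits in one scatterlist entry. */
	sg_set_page(table->sgl, pages, nr_pages << PAGE_SHIFT, 0);
	*out_pages = pages;
	return table;

free_table:
	kfree(table);
release:
	cma_release(cma, pages, nr_pages);
	return NULL;
}

/* Hypothetical helper: undo example_cma_sg_alloc(). */
static void example_cma_sg_free(struct cma *cma, struct sg_table *table,
				struct page *pages, unsigned long nr_pages)
{
	cma_release(cma, pages, nr_pages);
	sg_free_table(table);
	kfree(table);
}

Compared with the dma_alloc_coherent() path being removed, nothing in this
pattern needs a struct device or a kernel mapping at allocation time, which
sidesteps the caching questions the commit message mentions and is what lets
the heap fall back to the generic ion_heap_map_user()/ion_heap_map_kernel()
helpers in ion_cma_ops.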