From patchwork Thu Apr 13 07:57:17 2017
X-Patchwork-Submitter: Sakari Ailus
X-Patchwork-Id: 9679195
From: Sakari Ailus <sakari.ailus@linux.intel.com>
To: linux-media@vger.kernel.org
Cc: daniel.vetter@ffwll.ch, dri-devel@lists.freedesktop.org,
 hverkuil@xs4all.nl, kyungmin.park@samsung.com, posciak@chromium.org,
 m.szyprowski@samsung.com
Subject: [RFC v3 12/14] vb2: dma-sg: Let drivers decide DMA attrs of MMAP and
 USERPTR bufs
Date: Thu, 13 Apr 2017 10:57:17 +0300
Message-Id: <1492070239-21532-13-git-send-email-sakari.ailus@linux.intel.com>
In-Reply-To: <1492070239-21532-1-git-send-email-sakari.ailus@linux.intel.com>
References: <1492070239-21532-1-git-send-email-sakari.ailus@linux.intel.com>
X-Mailer: git-send-email 2.7.4

The desirable DMA attributes are not generic for all devices using the
Videobuf2 scatter-gather DMA ops; let the drivers decide. As a result,
the DMA-BUF exporter must also provide ops for synchronising the cache.
This adds begin_cpu_access and end_cpu_access ops to
vb2_dma_sg_dmabuf_ops.

Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
---
 drivers/media/v4l2-core/videobuf2-dma-sg.c | 81 +++++++++++++++++++++++-------
 1 file changed, 62 insertions(+), 19 deletions(-)

diff --git a/drivers/media/v4l2-core/videobuf2-dma-sg.c b/drivers/media/v4l2-core/videobuf2-dma-sg.c
index 102ddb2..5662f00 100644
--- a/drivers/media/v4l2-core/videobuf2-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf2-dma-sg.c
@@ -38,6 +38,7 @@ struct vb2_dma_sg_buf {
 	struct frame_vector	*vec;
 	int			offset;
 	enum dma_data_direction	dma_dir;
+	unsigned long		dma_attrs;
 	struct sg_table		sg_table;
 	/*
 	 * This will point to sg_table when used with the MMAP or USERPTR
@@ -114,6 +115,7 @@ static void *vb2_dma_sg_alloc(struct device *dev, unsigned long dma_attrs,

 	buf->vaddr = NULL;
 	buf->dma_dir = dma_dir;
+	buf->dma_attrs = dma_attrs;
 	buf->offset = 0;
 	buf->size = size;
 	/* size is already page aligned */
@@ -143,7 +145,7 @@ static void *vb2_dma_sg_alloc(struct device *dev, unsigned long dma_attrs,
 	 * prepare() memop is called.
 	 */
 	sgt->nents = dma_map_sg_attrs(buf->dev, sgt->sgl, sgt->orig_nents,
-				      buf->dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+				      buf->dma_dir, dma_attrs);
 	if (!sgt->nents)
 		goto fail_map;

@@ -181,7 +183,7 @@ static void vb2_dma_sg_put(void *buf_priv)
 		dprintk(1, "%s: Freeing buffer of %d pages\n", __func__,
 			buf->num_pages);
 		dma_unmap_sg_attrs(buf->dev, sgt->sgl, sgt->orig_nents,
-				   buf->dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+				   buf->dma_dir, buf->dma_attrs);
 		if (buf->vaddr)
 			vm_unmap_ram(buf->vaddr, buf->num_pages);
 		sg_free_table(buf->dma_sgt);
@@ -198,12 +200,13 @@ static void vb2_dma_sg_prepare(void *buf_priv)
 	struct vb2_dma_sg_buf *buf = buf_priv;
 	struct sg_table *sgt = buf->dma_sgt;

-	/* DMABUF exporter will flush the cache for us */
-	if (buf->db_attach)
-		return;
-
-	dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->orig_nents,
-			       buf->dma_dir);
+	/*
+	 * DMABUF exporter will flush the cache for us; only USERPTR
+	 * and MMAP buffers with non-coherent memory will be flushed.
+	 */
+	if (buf->dma_attrs & DMA_ATTR_NON_CONSISTENT)
+		dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->orig_nents,
+				       buf->dma_dir);
 }

 static void vb2_dma_sg_finish(void *buf_priv)
@@ -211,11 +214,13 @@ static void vb2_dma_sg_finish(void *buf_priv)
 	struct vb2_dma_sg_buf *buf = buf_priv;
 	struct sg_table *sgt = buf->dma_sgt;

-	/* DMABUF exporter will flush the cache for us */
-	if (buf->db_attach)
-		return;
-
-	dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);
+	/*
+	 * DMABUF exporter will flush the cache for us; only USERPTR
+	 * and MMAP buffers with non-coherent memory will be flushed.
+	 */
+	if (buf->dma_attrs & DMA_ATTR_NON_CONSISTENT)
+		dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->orig_nents,
+				    buf->dma_dir);
 }

 static void *vb2_dma_sg_get_userptr(struct device *dev, unsigned long vaddr,
@@ -237,6 +242,7 @@ static void *vb2_dma_sg_get_userptr(struct device *dev, unsigned long vaddr,
 	buf->vaddr = NULL;
 	buf->dev = dev;
 	buf->dma_dir = dma_dir;
+	buf->dma_attrs = dma_attrs;
 	buf->offset = vaddr & ~PAGE_MASK;
 	buf->size = size;
 	buf->dma_sgt = &buf->sg_table;
@@ -260,7 +266,7 @@ static void *vb2_dma_sg_get_userptr(struct device *dev, unsigned long vaddr,
 	 * prepare() memop is called.
 	 */
 	sgt->nents = dma_map_sg_attrs(buf->dev, sgt->sgl, sgt->orig_nents,
-				      buf->dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+				      buf->dma_dir, dma_attrs);
 	if (!sgt->nents)
 		goto userptr_fail_map;

@@ -288,7 +294,7 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
 	dprintk(1, "%s: Releasing userspace buffer of %d pages\n",
 		__func__, buf->num_pages);
 	dma_unmap_sg_attrs(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir,
-			   DMA_ATTR_SKIP_CPU_SYNC);
+			   buf->dma_attrs);
 	if (buf->vaddr)
 		vm_unmap_ram(buf->vaddr, buf->num_pages);
 	sg_free_table(buf->dma_sgt);
@@ -433,6 +439,7 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map(
 	struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir)
 {
 	struct vb2_dma_sg_attachment *attach = db_attach->priv;
+	struct vb2_dma_sg_buf *buf = db_attach->dmabuf->priv;
 	/* stealing dmabuf mutex to serialize map/unmap operations */
 	struct mutex *lock = &db_attach->dmabuf->lock;
 	struct sg_table *sgt;
@@ -448,14 +455,14 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map(

 	/* release any previous cache */
 	if (attach->dma_dir != DMA_NONE) {
-		dma_unmap_sg(db_attach->dev, sgt->sgl, sgt->orig_nents,
-			attach->dma_dir);
+		dma_unmap_sg_attrs(db_attach->dev, sgt->sgl, sgt->orig_nents,
+				   attach->dma_dir, buf->dma_attrs);
 		attach->dma_dir = DMA_NONE;
 	}

 	/* mapping to the client with new direction */
-	sgt->nents = dma_map_sg(db_attach->dev, sgt->sgl, sgt->orig_nents,
-				dma_dir);
+	sgt->nents = dma_map_sg_attrs(db_attach->dev, sgt->sgl, sgt->orig_nents,
+				      dma_dir, buf->dma_attrs);
 	if (!sgt->nents) {
 		pr_err("failed to map scatterlist\n");
 		mutex_unlock(lock);
@@ -488,6 +495,40 @@ static void *vb2_dma_sg_dmabuf_ops_kmap(struct dma_buf *dbuf, unsigned long pgnum)
 	return buf->vaddr ? buf->vaddr + pgnum * PAGE_SIZE : NULL;
 }

+static int vb2_dma_sg_dmabuf_ops_begin_cpu_access(
+	struct dma_buf *dbuf, enum dma_data_direction direction)
+{
+	struct vb2_dma_sg_buf *buf = dbuf->priv;
+	struct sg_table *sgt = buf->dma_sgt;
+
+	/*
+	 * DMABUF exporter will flush the cache for us; only USERPTR
+	 * and MMAP buffers with non-coherent memory will be flushed.
+	 */
+	if (buf->dma_attrs & DMA_ATTR_NON_CONSISTENT)
+		dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents,
+				    buf->dma_dir);
+
+	return 0;
+}
+
+static int vb2_dma_sg_dmabuf_ops_end_cpu_access(
+	struct dma_buf *dbuf, enum dma_data_direction direction)
+{
+	struct vb2_dma_sg_buf *buf = dbuf->priv;
+	struct sg_table *sgt = buf->dma_sgt;
+
+	/*
+	 * DMABUF exporter will flush the cache for us; only USERPTR
+	 * and MMAP buffers with non-coherent memory will be flushed.
+	 */
+	if (buf->dma_attrs & DMA_ATTR_NON_CONSISTENT)
+		dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents,
+				       buf->dma_dir);
+
+	return 0;
+}
+
 static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
 {
 	struct vb2_dma_sg_buf *buf = dbuf->priv;
@@ -508,6 +549,8 @@ static struct dma_buf_ops vb2_dma_sg_dmabuf_ops = {
 	.unmap_dma_buf = vb2_dma_sg_dmabuf_ops_unmap,
 	.kmap = vb2_dma_sg_dmabuf_ops_kmap,
 	.kmap_atomic = vb2_dma_sg_dmabuf_ops_kmap,
+	.begin_cpu_access = vb2_dma_sg_dmabuf_ops_begin_cpu_access,
+	.end_cpu_access = vb2_dma_sg_dmabuf_ops_end_cpu_access,
 	.vmap = vb2_dma_sg_dmabuf_ops_vmap,
 	.mmap = vb2_dma_sg_dmabuf_ops_mmap,
 	.release = vb2_dma_sg_dmabuf_ops_release,
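
For context, a hypothetical driver-side sketch (not part of this patch;
example_init_queue is a made-up name): a driver that wants its MMAP and
USERPTR buffers backed by non-coherent memory would ask for that via the
queue's DMA attributes, assuming vb2_queue::dma_attrs is wired through to
the dma-sg allocator as this series intends.

#include <linux/dma-mapping.h>
#include <media/videobuf2-v4l2.h>
#include <media/videobuf2-dma-sg.h>

static int example_init_queue(struct vb2_queue *q)
{
	q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
	q->mem_ops = &vb2_dma_sg_memops;
	/*
	 * Request non-coherent buffers; vb2 then performs the cache
	 * synchronisation in prepare()/finish() (and in the new
	 * begin/end_cpu_access ops) instead of the DMA mapping layer
	 * doing it implicitly at map/unmap time.
	 */
	q->dma_attrs = DMA_ATTR_NON_CONSISTENT;
	/* ops, buf_struct_size, lock, etc. omitted for brevity */

	return vb2_queue_init(q);
}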
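
And on the importer side, with the exporter now implementing
begin_cpu_access/end_cpu_access, CPU access to the exported buffer is
expected to be bracketed through the dma-buf API so the new ops can
synchronise the cache of non-coherent buffers. A minimal sketch, assuming
a hypothetical example_cpu_read() helper and the dma-buf kernel API of
this kernel generation (dma_buf_vmap() returning a void pointer):

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>

static int example_cpu_read(struct dma_buf *dbuf)
{
	void *vaddr;
	int ret;

	/* Lets the exporter invalidate CPU caches for a non-coherent buffer. */
	ret = dma_buf_begin_cpu_access(dbuf, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	vaddr = dma_buf_vmap(dbuf);
	if (vaddr) {
		/* ... read the frame data at vaddr ... */
		dma_buf_vunmap(dbuf, vaddr);
	}

	/* Hands the buffer back to the device; the exporter may clean caches here. */
	return dma_buf_end_cpu_access(dbuf, DMA_FROM_DEVICE);
}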