From patchwork Fri Oct 11 12:24:37 2013
X-Patchwork-Submitter: Arto Merilainen
X-Patchwork-Id: 3023971
From: Arto Merilainen
Subject: [PATCH] drm/tegra: Use dma_mapping API in mmap
Date: Fri, 11 Oct 2013 15:24:37 +0300
Message-ID: <1381494277-25542-1-git-send-email-amerilainen@nvidia.com>
Cc: linux-tegra@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, amerilainen@nvidia.com

So far we have used the remap_pfn_range() function directly to map buffers to user space. Calling this function has worked because all memory allocations have been contiguous. However, we must also support non-contiguous memory allocations, as we later want to turn on the IOMMU.

This patch modifies the code to use the dma_mapping API for mapping buffers to user space.

Signed-off-by: Arto Merilainen
---
I tested this patch on Cardhu using Hiroshi Doyu's series "Unified SMMU driver among Tegra SoCs" and using plain Linux 3.12-rc4. I have not tested this on T20, so I would appreciate help there (although I do not think this should affect the behavior on T20 at all).

I would also like to hear suggestions on better approaches for using dma_mmap_writecombine(). If IOMMU is not used, this function call ends up calling arm_iommu_mmap(), which assumes that the vm_pgoff value is valid. I see that other drm drivers simply implement fault callbacks and avoid this problem completely.
 drivers/gpu/host1x/drm/gem.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/host1x/drm/gem.c b/drivers/gpu/host1x/drm/gem.c
index 59623de..82ae3d5 100644
--- a/drivers/gpu/host1x/drm/gem.c
+++ b/drivers/gpu/host1x/drm/gem.c
@@ -240,6 +240,7 @@ int tegra_drm_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct drm_gem_object *gem;
 	struct tegra_bo *bo;
+	unsigned long vm_pgoff;
 	int ret;
 
 	ret = drm_gem_mmap(file, vma);
@@ -249,8 +250,19 @@ int tegra_drm_mmap(struct file *file, struct vm_area_struct *vma)
 	gem = vma->vm_private_data;
 	bo = to_tegra_bo(gem);
 
-	ret = remap_pfn_range(vma, vma->vm_start, bo->paddr >> PAGE_SHIFT,
-			      vma->vm_end - vma->vm_start, vma->vm_page_prot);
+	/* the pages are real */
+	vma->vm_flags &= ~VM_PFNMAP;
+
+	/*
+	 * drm holds a fake offset in vm_pgoff. dma_mapping assumes that
+	 * vm_pgoff contains data related to the buffer, so clear the
+	 * cookie temporarily.
+	 */
+	vm_pgoff = vma->vm_pgoff;
+	vma->vm_pgoff = 0;
+	ret = dma_mmap_writecombine(bo->gem.dev->dev, vma, bo->vaddr,
+				    bo->paddr, bo->gem.size);
+	vma->vm_pgoff = vm_pgoff;
+
 	if (ret)
 		drm_gem_vm_close(vma);
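For reviewers unfamiliar with the fake-offset cookie: the save/clear/restore dance in the hunk above can be modeled in miniature as plain user-space C. This is a sketch only; the struct and function names below are hypothetical stand-ins, not kernel APIs.

```c
#include <assert.h>

/* Hypothetical miniature of struct vm_area_struct: only the offset
 * cookie matters for this sketch. */
struct vma_model {
	unsigned long vm_pgoff;	/* DRM stores its fake mmap offset here */
};

/* Stand-in for dma_mmap_writecombine(): models a DMA mmap helper that
 * treats a non-zero vm_pgoff as an offset into the buffer and fails. */
static int mock_dma_mmap(struct vma_model *vma)
{
	return vma->vm_pgoff ? -1 : 0;
}

/* The pattern from the patch: save the DRM cookie, clear it so the DMA
 * helper sees a zero offset, then restore it afterwards so later users
 * of the vma still find the fake offset in place. */
static int tegra_mmap_model(struct vma_model *vma)
{
	unsigned long vm_pgoff = vma->vm_pgoff;
	int ret;

	vma->vm_pgoff = 0;
	ret = mock_dma_mmap(vma);
	vma->vm_pgoff = vm_pgoff;

	return ret;
}
```

The point of the restore step is that the cookie is borrowed, not consumed: the mapping succeeds because the helper sees a zero offset, yet the vma leaves the function with its original vm_pgoff intact.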