From patchwork Wed Dec 19 14:45:37 2018
X-Patchwork-Submitter: Lucas Stach
X-Patchwork-Id: 10737377
From: Lucas Stach
To: etnaviv@lists.freedesktop.org
Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King
Subject: [PATCH 01/10] drm/etnaviv: move job context pointer to etnaviv_gem_submit
Date: Wed, 19 Dec 2018 15:45:37 +0100
Message-Id: <20181219144546.28224-2-l.stach@pengutronix.de>
In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de>
References: <20181219144546.28224-1-l.stach@pengutronix.de>

The context isn't really related to the cmdbuf, but is a property of the
job. This has been missed when moving to a properly refcounted
etnaviv_gem_submit.
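To illustrate the resulting ownership (a minimal, hypothetical sketch only;
the real structures live in etnaviv_gem.h and etnaviv_cmdbuf.h, see the diff
below):

struct example_cmdbuf {
	/* purely a buffer descriptor now, no user context */
	void *vaddr;
	int suballoc_offset;
};

struct example_submit {				/* the job */
	struct kref refcount;			/* refcounted, may outlive the ioctl */
	struct etnaviv_file_private *ctx;	/* set from file->driver_priv at submit time */
	struct example_cmdbuf cmdbuf;
};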
Signed-off-by: Lucas Stach
Reviewed-by: Christian Gmeiner
---
 drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h     | 2 --
 drivers/gpu/drm/etnaviv/etnaviv_gem.h        | 1 +
 drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 2 +-
 drivers/gpu/drm/etnaviv/etnaviv_sched.c      | 2 +-
 4 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h
index acb68c698363..4d5d1a77eb2a 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h
@@ -15,8 +15,6 @@ struct etnaviv_perfmon_request;
 struct etnaviv_cmdbuf {
 	/* suballocator this cmdbuf is allocated from */
 	struct etnaviv_cmdbuf_suballoc *suballoc;
-	/* user context key, must be unique between all active users */
-	struct etnaviv_file_private *ctx;
 	/* cmdbuf properties */
 	int suballoc_offset;
 	void *vaddr;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h
index 76079c2291f8..f0abb744ef95 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h
@@ -95,6 +95,7 @@ struct etnaviv_gem_submit_bo {
 struct etnaviv_gem_submit {
 	struct drm_sched_job sched_job;
 	struct kref refcount;
+	struct etnaviv_file_private *ctx;
 	struct etnaviv_gpu *gpu;
 	struct dma_fence *out_fence, *in_fence;
 	int out_fence_id;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
index 983e67f19e45..86c005ab0c91 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
@@ -506,7 +506,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (ret)
 		goto err_submit_objects;
 
-	submit->cmdbuf.ctx = file->driver_priv;
+	submit->ctx = file->driver_priv;
 	submit->exec_state = args->exec_state;
 	submit->flags = args->flags;
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index e5a9fae31ab7..1bb076f8b2b7 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -150,7 +150,7 @@ int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity,
 	mutex_lock(&submit->gpu->fence_lock);
 
 	ret = drm_sched_job_init(&submit->sched_job, sched_entity,
-				 submit->cmdbuf.ctx);
+				 submit->ctx);
 	if (ret)
 		goto out_unlock;
 
From patchwork Wed Dec 19 14:45:38 2018
X-Patchwork-Submitter: Lucas Stach
X-Patchwork-Id: 10737375
From: Lucas Stach
To: etnaviv@lists.freedesktop.org
Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King
Subject: [PATCH 02/10] drm/etnaviv: mmuv2: don't map zero page
Date: Wed, 19 Dec 2018 15:45:38 +0100
Message-Id: <20181219144546.28224-3-l.stach@pengutronix.de>
In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de>
References: <20181219144546.28224-1-l.stach@pengutronix.de>

Keep the page at address 0 as faulting to catch any potential state setup
issues early.
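The intent can be sketched like this (illustrative only; SZ_4K and SZ_1G come
from <linux/sizes.h>, the helper name is made up). With the address space
starting at 4K, a stray GPU address of 0 falls outside the managed range and
raises an MMU fault instead of silently hitting a valid mapping:

#define EXAMPLE_AS_BASE	SZ_4K
#define EXAMPLE_AS_SIZE	((u64)SZ_1G * 4 - SZ_4K)

static bool example_iova_in_range(u64 iova)
{
	/* iova 0 (the zero page) is never part of the allocatable range */
	return iova >= EXAMPLE_AS_BASE &&
	       iova < EXAMPLE_AS_BASE + EXAMPLE_AS_SIZE;
}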
Signed-off-by: Lucas Stach
Reviewed-by: Christian Gmeiner
Reviewed-By: Guido Günther
---
 drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
index f1c88d8ad5ba..f794e04be9e6 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
@@ -320,8 +320,8 @@ etnaviv_iommuv2_domain_alloc(struct etnaviv_gpu *gpu)
 	domain = &etnaviv_domain->base;
 
 	domain->dev = gpu->dev;
-	domain->base = 0;
-	domain->size = (u64)SZ_1G * 4;
+	domain->base = SZ_4K;
+	domain->size = (u64)SZ_1G * 4 - SZ_4K;
 	domain->ops = &etnaviv_iommuv2_ops;
 
 	ret = etnaviv_iommuv2_init(etnaviv_domain);

From patchwork Wed Dec 19 14:45:39 2018
X-Patchwork-Submitter: Lucas Stach
X-Patchwork-Id: 10737381
Received: from dude.hi.pengutronix.de ([2001:67c:670:100:1d::7] helo=dude.pengutronix.de.)
by metis.ext.pengutronix.de with esmtp (Exim 4.89) (envelope-from ) id 1gZd6Z-0006y1-3N; Wed, 19 Dec 2018 15:45:47 +0100 From: Lucas Stach To: etnaviv@lists.freedesktop.org Subject: [PATCH 03/10] drm/etnaviv: split out cmdbuf mapping into address space Date: Wed, 19 Dec 2018 15:45:39 +0100 Message-Id: <20181219144546.28224-4-l.stach@pengutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de> References: <20181219144546.28224-1-l.stach@pengutronix.de> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 2001:67c:670:100:1d::7 X-SA-Exim-Mail-From: l.stach@pengutronix.de X-SA-Exim-Scanned: No (on metis.ext.pengutronix.de); SAEximRunCond expanded to false X-PTX-Original-Recipient: dri-devel@lists.freedesktop.org X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP This allows to decouple the cmdbuf suballocator create and mapping the region into the GPU address space. Allowing multiple AS to share a single cmdbuf suballoc. Signed-off-by: Lucas Stach --- drivers/gpu/drm/etnaviv/etnaviv_buffer.c | 23 ++++--- drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c | 42 +++++++------ drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h | 12 +++- drivers/gpu/drm/etnaviv/etnaviv_dump.c | 6 +- drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 26 ++++++-- drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 3 +- drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 78 +++++++++++++----------- drivers/gpu/drm/etnaviv/etnaviv_mmu.h | 12 ++-- 8 files changed, 123 insertions(+), 79 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c index 160ce3c060a5..401adf905d95 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c @@ -116,7 +116,8 @@ static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, u32 *ptr = buf->vaddr + off; dev_info(gpu->dev, "virt %p phys 0x%08x free 0x%08x\n", - ptr, etnaviv_cmdbuf_get_va(buf) + off, size - len * 4 - off); + ptr, etnaviv_cmdbuf_get_va(buf, &gpu->cmdbuf_mapping) + + off, size - len * 4 - off); print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, ptr, len * 4, 0); @@ -149,7 +150,8 @@ static u32 etnaviv_buffer_reserve(struct etnaviv_gpu *gpu, if (buffer->user_size + cmd_dwords * sizeof(u64) > buffer->size) buffer->user_size = 0; - return etnaviv_cmdbuf_get_va(buffer) + buffer->user_size; + return etnaviv_cmdbuf_get_va(buffer, &gpu->cmdbuf_mapping) + + buffer->user_size; } u16 etnaviv_buffer_init(struct etnaviv_gpu *gpu) @@ -162,8 +164,8 @@ u16 etnaviv_buffer_init(struct etnaviv_gpu *gpu) buffer->user_size = 0; CMD_WAIT(buffer); - CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer) + - buffer->user_size - 4); + CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->cmdbuf_mapping) + + buffer->user_size - 4); return buffer->user_size / 8; } @@ -289,8 +291,8 @@ void etnaviv_sync_point_queue(struct etnaviv_gpu *gpu, unsigned int event) /* Append waitlink */ CMD_WAIT(buffer); - CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer) + - buffer->user_size - 4); + CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->cmdbuf_mapping) + + buffer->user_size - 4); /* * Kick off the 'sync point' command by replacing the previous @@ -317,7 
+319,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, if (drm_debug & DRM_UT_DRIVER) etnaviv_buffer_dump(gpu, buffer, 0, 0x50); - link_target = etnaviv_cmdbuf_get_va(cmdbuf); + link_target = etnaviv_cmdbuf_get_va(cmdbuf, &gpu->cmdbuf_mapping); link_dwords = cmdbuf->size / 8; /* @@ -410,12 +412,13 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE); CMD_WAIT(buffer); - CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer) + - buffer->user_size - 4); + CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->cmdbuf_mapping) + + buffer->user_size - 4); if (drm_debug & DRM_UT_DRIVER) pr_info("stream link to 0x%08x @ 0x%08x %p\n", - return_target, etnaviv_cmdbuf_get_va(cmdbuf), + return_target, + etnaviv_cmdbuf_get_va(cmdbuf, &gpu->cmdbuf_mapping), cmdbuf->vaddr); if (drm_debug & DRM_UT_DRIVER) { diff --git a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c index a3c44f145c1d..1bc529399783 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c @@ -6,9 +6,9 @@ #include #include "etnaviv_cmdbuf.h" +#include "etnaviv_gem.h" #include "etnaviv_gpu.h" #include "etnaviv_mmu.h" -#include "etnaviv_perfmon.h" #define SUBALLOC_SIZE SZ_256K #define SUBALLOC_GRANULE SZ_4K @@ -20,10 +20,6 @@ struct etnaviv_cmdbuf_suballoc { void *vaddr; dma_addr_t paddr; - /* GPU mapping */ - u32 iova; - struct drm_mm_node vram_node; /* only used on MMUv2 */ - /* allocation management */ struct mutex lock; DECLARE_BITMAP(granule_map, SUBALLOC_GRANULES); @@ -47,29 +43,36 @@ etnaviv_cmdbuf_suballoc_new(struct etnaviv_gpu * gpu) suballoc->vaddr = dma_alloc_wc(gpu->dev, SUBALLOC_SIZE, &suballoc->paddr, GFP_KERNEL); - if (!suballoc->vaddr) + if (!suballoc->vaddr) { + ret = -ENOMEM; goto free_suballoc; - - ret = etnaviv_iommu_get_suballoc_va(gpu, suballoc->paddr, - &suballoc->vram_node, SUBALLOC_SIZE, - &suballoc->iova); - if (ret) - goto free_dma; + } return suballoc; -free_dma: - dma_free_wc(gpu->dev, SUBALLOC_SIZE, suballoc->vaddr, suballoc->paddr); free_suballoc: kfree(suballoc); - return NULL; + return ERR_PTR(ret); +} + +int etnaviv_cmdbuf_suballoc_map(struct etnaviv_cmdbuf_suballoc *suballoc, + struct etnaviv_iommu *mmu, + struct etnaviv_vram_mapping *mapping, + u32 memory_base) +{ + return etnaviv_iommu_get_suballoc_va(mmu, mapping, memory_base, + suballoc->paddr, SUBALLOC_SIZE); +} + +void etnaviv_cmdbuf_suballoc_unmap(struct etnaviv_iommu *mmu, + struct etnaviv_vram_mapping *mapping) +{ + etnaviv_iommu_put_suballoc_va(mmu, mapping); } void etnaviv_cmdbuf_suballoc_destroy(struct etnaviv_cmdbuf_suballoc *suballoc) { - etnaviv_iommu_put_suballoc_va(suballoc->gpu, &suballoc->vram_node, - SUBALLOC_SIZE, suballoc->iova); dma_free_wc(suballoc->gpu->dev, SUBALLOC_SIZE, suballoc->vaddr, suballoc->paddr); kfree(suballoc); @@ -123,9 +126,10 @@ void etnaviv_cmdbuf_free(struct etnaviv_cmdbuf *cmdbuf) wake_up_all(&suballoc->free_event); } -u32 etnaviv_cmdbuf_get_va(struct etnaviv_cmdbuf *buf) +u32 etnaviv_cmdbuf_get_va(struct etnaviv_cmdbuf *buf, + struct etnaviv_vram_mapping *mapping) { - return buf->suballoc->iova + buf->suballoc_offset; + return mapping->iova + buf->suballoc_offset; } dma_addr_t etnaviv_cmdbuf_get_pa(struct etnaviv_cmdbuf *buf) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h index 4d5d1a77eb2a..11d95f05c017 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h 
+++ b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h @@ -9,8 +9,9 @@ #include struct etnaviv_gpu; +struct etnaviv_iommu; +struct etnaviv_vram_mapping; struct etnaviv_cmdbuf_suballoc; -struct etnaviv_perfmon_request; struct etnaviv_cmdbuf { /* suballocator this cmdbuf is allocated from */ @@ -25,13 +26,20 @@ struct etnaviv_cmdbuf { struct etnaviv_cmdbuf_suballoc * etnaviv_cmdbuf_suballoc_new(struct etnaviv_gpu * gpu); void etnaviv_cmdbuf_suballoc_destroy(struct etnaviv_cmdbuf_suballoc *suballoc); +int etnaviv_cmdbuf_suballoc_map(struct etnaviv_cmdbuf_suballoc *suballoc, + struct etnaviv_iommu *mmu, + struct etnaviv_vram_mapping *mapping, + u32 memory_base); +void etnaviv_cmdbuf_suballoc_unmap(struct etnaviv_iommu *mmu, + struct etnaviv_vram_mapping *mapping); int etnaviv_cmdbuf_init(struct etnaviv_cmdbuf_suballoc *suballoc, struct etnaviv_cmdbuf *cmdbuf, u32 size); void etnaviv_cmdbuf_free(struct etnaviv_cmdbuf *cmdbuf); -u32 etnaviv_cmdbuf_get_va(struct etnaviv_cmdbuf *buf); +u32 etnaviv_cmdbuf_get_va(struct etnaviv_cmdbuf *buf, + struct etnaviv_vram_mapping *mapping); dma_addr_t etnaviv_cmdbuf_get_pa(struct etnaviv_cmdbuf *buf); #endif /* __ETNAVIV_CMDBUF_H__ */ diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c index 9146e30e24a6..b4e02063b802 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c @@ -180,14 +180,16 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu) etnaviv_core_dump_mmu(&iter, gpu, mmu_size); etnaviv_core_dump_mem(&iter, ETDUMP_BUF_RING, gpu->buffer.vaddr, gpu->buffer.size, - etnaviv_cmdbuf_get_va(&gpu->buffer)); + etnaviv_cmdbuf_get_va(&gpu->buffer, + &gpu->cmdbuf_mapping)); spin_lock(&gpu->sched.job_list_lock); list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) { submit = to_etnaviv_submit(s_job); etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD, submit->cmdbuf.vaddr, submit->cmdbuf.size, - etnaviv_cmdbuf_get_va(&submit->cmdbuf)); + etnaviv_cmdbuf_get_va(&submit->cmdbuf, + &gpu->cmdbuf_mapping)); } spin_unlock(&gpu->sched.job_list_lock); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c index 6904535475de..1b11e6c393a0 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c @@ -685,8 +685,8 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) prefetch = etnaviv_buffer_init(gpu); gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U); - etnaviv_gpu_start_fe(gpu, etnaviv_cmdbuf_get_va(&gpu->buffer), - prefetch); + etnaviv_gpu_start_fe(gpu, etnaviv_cmdbuf_get_va(&gpu->buffer, + &gpu->cmdbuf_mapping), prefetch); } int etnaviv_gpu_init(struct etnaviv_gpu *gpu) @@ -762,7 +762,15 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) if (IS_ERR(gpu->cmdbuf_suballoc)) { dev_err(gpu->dev, "Failed to create cmdbuf suballocator\n"); ret = PTR_ERR(gpu->cmdbuf_suballoc); - goto fail; + goto destroy_iommu; + } + + ret = etnaviv_cmdbuf_suballoc_map(gpu->cmdbuf_suballoc, gpu->mmu, + &gpu->cmdbuf_mapping, + gpu->memory_base); + if (ret) { + dev_err(gpu->dev, "failed to map cmdbuf suballoc\n"); + goto destroy_suballoc; } /* Create buffer: */ @@ -770,11 +778,11 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) PAGE_SIZE); if (ret) { dev_err(gpu->dev, "could not create command buffer\n"); - goto destroy_iommu; + goto unmap_suballoc; } if (gpu->mmu->version == ETNAVIV_IOMMU_V1 && - etnaviv_cmdbuf_get_va(&gpu->buffer) > 0x80000000) { + etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping) > 0x80000000) { ret = -EINVAL; dev_err(gpu->dev, 
"command buffer outside valid memory window\n"); @@ -802,6 +810,11 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) free_buffer: etnaviv_cmdbuf_free(&gpu->buffer); gpu->buffer.suballoc = NULL; +unmap_suballoc: + etnaviv_cmdbuf_suballoc_unmap(gpu->mmu, &gpu->cmdbuf_mapping); +destroy_suballoc: + etnaviv_cmdbuf_suballoc_destroy(gpu->cmdbuf_suballoc); + gpu->cmdbuf_suballoc = NULL; destroy_iommu: etnaviv_iommu_destroy(gpu->mmu); gpu->mmu = NULL; @@ -1678,6 +1691,9 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, if (gpu->buffer.suballoc) etnaviv_cmdbuf_free(&gpu->buffer); + if (gpu->cmdbuf_mapping.use == 1) + etnaviv_cmdbuf_suballoc_unmap(gpu->mmu, &gpu->cmdbuf_mapping); + if (gpu->cmdbuf_suballoc) { etnaviv_cmdbuf_suballoc_destroy(gpu->cmdbuf_suballoc); gpu->cmdbuf_suballoc = NULL; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h index 9bcf151f706b..8c68869ba180 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h @@ -7,6 +7,7 @@ #define __ETNAVIV_GPU_H__ #include "etnaviv_cmdbuf.h" +#include "etnaviv_gem.h" #include "etnaviv_drv.h" struct etnaviv_gem_submit; @@ -84,7 +85,6 @@ struct etnaviv_event { }; struct etnaviv_cmdbuf_suballoc; -struct etnaviv_cmdbuf; struct regulator; struct clk; @@ -101,6 +101,7 @@ struct etnaviv_gpu { struct drm_gpu_scheduler sched; /* 'ring'-buffer: */ + struct etnaviv_vram_mapping cmdbuf_mapping; struct etnaviv_cmdbuf buffer; int exec_state; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c index 8069f9f36a2e..070509a1f949 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c @@ -332,52 +332,62 @@ void etnaviv_iommu_restore(struct etnaviv_gpu *gpu) etnaviv_iommuv2_restore(gpu); } -int etnaviv_iommu_get_suballoc_va(struct etnaviv_gpu *gpu, dma_addr_t paddr, - struct drm_mm_node *vram_node, size_t size, - u32 *iova) +int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu *mmu, + struct etnaviv_vram_mapping *mapping, + u32 memory_base, dma_addr_t paddr, + size_t size) { - struct etnaviv_iommu *mmu = gpu->mmu; + struct drm_mm_node *node; + int ret; if (mmu->version == ETNAVIV_IOMMU_V1) { - *iova = paddr - gpu->memory_base; + mapping->iova = paddr - memory_base; + mapping->use = 1; + list_add_tail(&mapping->mmu_node, &mmu->mappings); return 0; - } else { - int ret; + } - mutex_lock(&mmu->lock); - ret = etnaviv_iommu_find_iova(mmu, vram_node, size); - if (ret < 0) { - mutex_unlock(&mmu->lock); - return ret; - } - ret = etnaviv_domain_map(mmu->domain, vram_node->start, paddr, - size, ETNAVIV_PROT_READ); - if (ret < 0) { - drm_mm_remove_node(vram_node); - mutex_unlock(&mmu->lock); - return ret; - } - gpu->mmu->need_flush = true; - mutex_unlock(&mmu->lock); + node = &mapping->vram_node; - *iova = (u32)vram_node->start; - return 0; + mutex_lock(&mmu->lock); + + ret = etnaviv_iommu_find_iova(mmu, node, size); + if (ret < 0) + goto unlock; + + mapping->iova = node->start; + ret = etnaviv_domain_map(mmu->domain, node->start, paddr, size, + ETNAVIV_PROT_READ); + + if (ret < 0) { + drm_mm_remove_node(node); + goto unlock; } + + list_add_tail(&mapping->mmu_node, &mmu->mappings); + mmu->need_flush = true; +unlock: + mutex_unlock(&mmu->lock); + + return ret; } -void etnaviv_iommu_put_suballoc_va(struct etnaviv_gpu *gpu, - struct drm_mm_node *vram_node, size_t size, - u32 iova) +void etnaviv_iommu_put_suballoc_va(struct etnaviv_iommu *mmu, + struct etnaviv_vram_mapping *mapping) { - struct 
etnaviv_iommu *mmu = gpu->mmu; + struct drm_mm_node *node = &mapping->vram_node; - if (mmu->version == ETNAVIV_IOMMU_V2) { - mutex_lock(&mmu->lock); - etnaviv_domain_unmap(mmu->domain, iova, size); - drm_mm_remove_node(vram_node); - mutex_unlock(&mmu->lock); - } + mapping->use = 0; + + if (mmu->version == ETNAVIV_IOMMU_V1) + return; + + mutex_lock(&mmu->lock); + etnaviv_domain_unmap(mmu->domain, node->start, node->size); + drm_mm_remove_node(node); + mutex_unlock(&mmu->lock); } + size_t etnaviv_iommu_dump_size(struct etnaviv_iommu *iommu) { return iommu->domain->ops->dump_size(iommu->domain); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h index a0db17ffb686..fe1c9d6b9334 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h @@ -59,12 +59,12 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, struct etnaviv_vram_mapping *mapping); -int etnaviv_iommu_get_suballoc_va(struct etnaviv_gpu *gpu, dma_addr_t paddr, - struct drm_mm_node *vram_node, size_t size, - u32 *iova); -void etnaviv_iommu_put_suballoc_va(struct etnaviv_gpu *gpu, - struct drm_mm_node *vram_node, size_t size, - u32 iova); +int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu *mmu, + struct etnaviv_vram_mapping *mapping, + u32 memory_base, dma_addr_t paddr, + size_t size); +void etnaviv_iommu_put_suballoc_va(struct etnaviv_iommu *mmu, + struct etnaviv_vram_mapping *mapping); size_t etnaviv_iommu_dump_size(struct etnaviv_iommu *iommu); void etnaviv_iommu_dump(struct etnaviv_iommu *iommu, void *buf); From patchwork Wed Dec 19 14:45:40 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lucas Stach X-Patchwork-Id: 10737379 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A71F613B5 for ; Wed, 19 Dec 2018 14:46:25 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 969B029D39 for ; Wed, 19 Dec 2018 14:46:25 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 884D32A111; Wed, 19 Dec 2018 14:46:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id F1DD729997 for ; Wed, 19 Dec 2018 14:46:24 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 10E976F008; Wed, 19 Dec 2018 14:46:20 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from metis.ext.pengutronix.de (metis.ext.pengutronix.de [IPv6:2001:67c:670:201:290:27ff:fe1d:cc33]) by gabe.freedesktop.org (Postfix) with ESMTPS id DD0C06EFFB for ; Wed, 19 Dec 2018 14:45:50 +0000 (UTC) Received: from dude.hi.pengutronix.de ([2001:67c:670:100:1d::7] helo=dude.pengutronix.de.) 
by metis.ext.pengutronix.de with esmtp (Exim 4.89) (envelope-from ) id 1gZd6Z-0006y1-5w; Wed, 19 Dec 2018 15:45:47 +0100 From: Lucas Stach To: etnaviv@lists.freedesktop.org Subject: [PATCH 04/10] drm/etnaviv: share a single cmdbuf suballoc region across all GPUs Date: Wed, 19 Dec 2018 15:45:40 +0100 Message-Id: <20181219144546.28224-5-l.stach@pengutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de> References: <20181219144546.28224-1-l.stach@pengutronix.de> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 2001:67c:670:100:1d::7 X-SA-Exim-Mail-From: l.stach@pengutronix.de X-SA-Exim-Scanned: No (on metis.ext.pengutronix.de); SAEximRunCond expanded to false X-PTX-Original-Recipient: dri-devel@lists.freedesktop.org X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP There is no need for each GPU to have it's own cmdbuf suballocation region. Only allocate a single one for the the etnaviv virtual device and share it across all GPUs. Signed-off-by: Lucas Stach --- drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c | 14 ++++++------ drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h | 4 ++-- drivers/gpu/drm/etnaviv/etnaviv_drv.c | 11 +++++++++- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 ++ drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 2 +- drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 23 ++++---------------- drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 3 +-- 7 files changed, 27 insertions(+), 32 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c index 1bc529399783..a01ae32dcd88 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c @@ -10,13 +10,13 @@ #include "etnaviv_gpu.h" #include "etnaviv_mmu.h" -#define SUBALLOC_SIZE SZ_256K +#define SUBALLOC_SIZE SZ_512K #define SUBALLOC_GRANULE SZ_4K #define SUBALLOC_GRANULES (SUBALLOC_SIZE / SUBALLOC_GRANULE) struct etnaviv_cmdbuf_suballoc { /* suballocated dma buffer properties */ - struct etnaviv_gpu *gpu; + struct device *dev; void *vaddr; dma_addr_t paddr; @@ -28,7 +28,7 @@ struct etnaviv_cmdbuf_suballoc { }; struct etnaviv_cmdbuf_suballoc * -etnaviv_cmdbuf_suballoc_new(struct etnaviv_gpu * gpu) +etnaviv_cmdbuf_suballoc_new(struct device *dev) { struct etnaviv_cmdbuf_suballoc *suballoc; int ret; @@ -37,11 +37,11 @@ etnaviv_cmdbuf_suballoc_new(struct etnaviv_gpu * gpu) if (!suballoc) return ERR_PTR(-ENOMEM); - suballoc->gpu = gpu; + suballoc->dev = dev; mutex_init(&suballoc->lock); init_waitqueue_head(&suballoc->free_event); - suballoc->vaddr = dma_alloc_wc(gpu->dev, SUBALLOC_SIZE, + suballoc->vaddr = dma_alloc_wc(dev, SUBALLOC_SIZE, &suballoc->paddr, GFP_KERNEL); if (!suballoc->vaddr) { ret = -ENOMEM; @@ -73,7 +73,7 @@ void etnaviv_cmdbuf_suballoc_unmap(struct etnaviv_iommu *mmu, void etnaviv_cmdbuf_suballoc_destroy(struct etnaviv_cmdbuf_suballoc *suballoc) { - dma_free_wc(suballoc->gpu->dev, SUBALLOC_SIZE, suballoc->vaddr, + dma_free_wc(suballoc->dev, SUBALLOC_SIZE, suballoc->vaddr, suballoc->paddr); kfree(suballoc); } @@ -98,7 +98,7 @@ int etnaviv_cmdbuf_init(struct etnaviv_cmdbuf_suballoc *suballoc, suballoc->free_space, msecs_to_jiffies(10 * 1000)); if (!ret) { 
- dev_err(suballoc->gpu->dev, + dev_err(suballoc->dev, "Timeout waiting for cmdbuf space\n"); return -ETIMEDOUT; } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h index 11d95f05c017..7fdc2e3fea5f 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h @@ -8,7 +8,7 @@ #include -struct etnaviv_gpu; +struct device; struct etnaviv_iommu; struct etnaviv_vram_mapping; struct etnaviv_cmdbuf_suballoc; @@ -24,7 +24,7 @@ struct etnaviv_cmdbuf { }; struct etnaviv_cmdbuf_suballoc * -etnaviv_cmdbuf_suballoc_new(struct etnaviv_gpu * gpu); +etnaviv_cmdbuf_suballoc_new(struct device *dev); void etnaviv_cmdbuf_suballoc_destroy(struct etnaviv_cmdbuf_suballoc *suballoc); int etnaviv_cmdbuf_suballoc_map(struct etnaviv_cmdbuf_suballoc *suballoc, struct etnaviv_iommu *mmu, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c index 96efc84396bf..b68c28e994c9 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c @@ -30,7 +30,7 @@ static void load_gpu(struct drm_device *dev) if (g) { int ret; - ret = etnaviv_gpu_init(g); + ret = etnaviv_gpu_init(priv, g); if (ret) priv->gpu[i] = NULL; } @@ -525,6 +525,13 @@ static int etnaviv_bind(struct device *dev) INIT_LIST_HEAD(&priv->gem_list); priv->num_gpus = 0; + priv->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(drm->dev); + if (IS_ERR(priv->cmdbuf_suballoc)) { + dev_err(drm->dev, "Failed to create cmdbuf suballocator\n"); + ret = PTR_ERR(priv->cmdbuf_suballoc); + goto out_unref; + } + dev_set_drvdata(dev, drm); ret = component_bind_all(dev, drm); @@ -558,6 +565,8 @@ static void etnaviv_unbind(struct device *dev) component_unbind_all(dev, drm); + etnaviv_cmdbuf_suballoc_destroy(priv->cmdbuf_suballoc); + drm->dev_private = NULL; kfree(priv); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 4bf698de5996..5c1b6ea25e59 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -45,6 +45,8 @@ struct etnaviv_drm_private { int num_gpus; struct etnaviv_gpu *gpu[ETNA_MAX_PIPES]; + struct etnaviv_cmdbuf_suballoc *cmdbuf_suballoc; + /* list of GEM objects: */ struct mutex gem_lock; struct list_head gem_list; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 86c005ab0c91..989baa7275da 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -501,7 +501,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, goto err_submit_ww_acquire; } - ret = etnaviv_cmdbuf_init(gpu->cmdbuf_suballoc, &submit->cmdbuf, + ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &submit->cmdbuf, ALIGN(args->stream_size, 8) + 8); if (ret) goto err_submit_objects; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c index 1b11e6c393a0..70351568b3a4 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c @@ -689,7 +689,7 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) &gpu->cmdbuf_mapping), prefetch); } -int etnaviv_gpu_init(struct etnaviv_gpu *gpu) +int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) { int ret, i; @@ -758,23 +758,16 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) goto fail; } - gpu->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(gpu); - if (IS_ERR(gpu->cmdbuf_suballoc)) { - dev_err(gpu->dev, 
"Failed to create cmdbuf suballocator\n"); - ret = PTR_ERR(gpu->cmdbuf_suballoc); - goto destroy_iommu; - } - - ret = etnaviv_cmdbuf_suballoc_map(gpu->cmdbuf_suballoc, gpu->mmu, + ret = etnaviv_cmdbuf_suballoc_map(priv->cmdbuf_suballoc, gpu->mmu, &gpu->cmdbuf_mapping, gpu->memory_base); if (ret) { dev_err(gpu->dev, "failed to map cmdbuf suballoc\n"); - goto destroy_suballoc; + goto destroy_iommu; } /* Create buffer: */ - ret = etnaviv_cmdbuf_init(gpu->cmdbuf_suballoc, &gpu->buffer, + ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &gpu->buffer, PAGE_SIZE); if (ret) { dev_err(gpu->dev, "could not create command buffer\n"); @@ -812,9 +805,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) gpu->buffer.suballoc = NULL; unmap_suballoc: etnaviv_cmdbuf_suballoc_unmap(gpu->mmu, &gpu->cmdbuf_mapping); -destroy_suballoc: - etnaviv_cmdbuf_suballoc_destroy(gpu->cmdbuf_suballoc); - gpu->cmdbuf_suballoc = NULL; destroy_iommu: etnaviv_iommu_destroy(gpu->mmu); gpu->mmu = NULL; @@ -1694,11 +1684,6 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, if (gpu->cmdbuf_mapping.use == 1) etnaviv_cmdbuf_suballoc_unmap(gpu->mmu, &gpu->cmdbuf_mapping); - if (gpu->cmdbuf_suballoc) { - etnaviv_cmdbuf_suballoc_destroy(gpu->cmdbuf_suballoc); - gpu->cmdbuf_suballoc = NULL; - } - if (gpu->mmu) { etnaviv_iommu_destroy(gpu->mmu); gpu->mmu = NULL; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h index 8c68869ba180..004f2cdfb4e0 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h @@ -136,7 +136,6 @@ struct etnaviv_gpu { int irq; struct etnaviv_iommu *mmu; - struct etnaviv_cmdbuf_suballoc *cmdbuf_suballoc; /* Power Control: */ struct clk *clk_bus; @@ -161,7 +160,7 @@ static inline u32 gpu_read(struct etnaviv_gpu *gpu, u32 reg) int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, u32 param, u64 *value); -int etnaviv_gpu_init(struct etnaviv_gpu *gpu); +int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu); bool etnaviv_fill_identity_from_hwdb(struct etnaviv_gpu *gpu); #ifdef CONFIG_DEBUG_FS From patchwork Wed Dec 19 14:45:41 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lucas Stach X-Patchwork-Id: 10737363 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F15DB13B5 for ; Wed, 19 Dec 2018 14:45:57 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E1B06296AD for ; Wed, 19 Dec 2018 14:45:57 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D54D12981D; Wed, 19 Dec 2018 14:45:57 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 7BB6E296AD for ; Wed, 19 Dec 2018 14:45:57 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 3A9876EFF4; Wed, 19 Dec 2018 14:45:52 +0000 (UTC) X-Original-To: 
dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from metis.ext.pengutronix.de (metis.ext.pengutronix.de [IPv6:2001:67c:670:201:290:27ff:fe1d:cc33]) by gabe.freedesktop.org (Postfix) with ESMTPS id E65636EFFC for ; Wed, 19 Dec 2018 14:45:50 +0000 (UTC) Received: from dude.hi.pengutronix.de ([2001:67c:670:100:1d::7] helo=dude.pengutronix.de.) by metis.ext.pengutronix.de with esmtp (Exim 4.89) (envelope-from ) id 1gZd6Z-0006y1-8G; Wed, 19 Dec 2018 15:45:47 +0100 From: Lucas Stach To: etnaviv@lists.freedesktop.org Subject: [PATCH 05/10] drm/etnaviv: replace MMU flush marker with flush sequence Date: Wed, 19 Dec 2018 15:45:41 +0100 Message-Id: <20181219144546.28224-6-l.stach@pengutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de> References: <20181219144546.28224-1-l.stach@pengutronix.de> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 2001:67c:670:100:1d::7 X-SA-Exim-Mail-From: l.stach@pengutronix.de X-SA-Exim-Scanned: No (on metis.ext.pengutronix.de); SAEximRunCond expanded to false X-PTX-Original-Recipient: dri-devel@lists.freedesktop.org X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP If a MMU is shared between multiple GPUs, all of them need to flush their TLBs, so a single marker that gets reset on the first flush won't do. Replace the flush marker with a sequence number, so that it's possible to check if the TLB is in sync with the current page table state for each GPU. Signed-off-by: Lucas Stach --- drivers/gpu/drm/etnaviv/etnaviv_buffer.c | 9 +++++---- drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 1 + drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 6 +++--- drivers/gpu/drm/etnaviv/etnaviv_mmu.h | 2 +- 4 files changed, 10 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c index 401adf905d95..d52c01c195bd 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c @@ -313,6 +313,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, u32 return_target, return_dwords; u32 link_target, link_dwords; bool switch_context = gpu->exec_state != exec_state; + bool need_flush = gpu->flush_seq != gpu->mmu->flush_seq; lockdep_assert_held(&gpu->lock); @@ -327,14 +328,14 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, * need to append a mmu flush load state, followed by a new * link to this buffer - a total of four additional words. 
*/ - if (gpu->mmu->need_flush || switch_context) { + if (need_flush || switch_context) { u32 target, extra_dwords; /* link command */ extra_dwords = 1; /* flush command */ - if (gpu->mmu->need_flush) { + if (need_flush) { if (gpu->mmu->version == ETNAVIV_IOMMU_V1) extra_dwords += 1; else @@ -347,7 +348,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, target = etnaviv_buffer_reserve(gpu, buffer, extra_dwords); - if (gpu->mmu->need_flush) { + if (need_flush) { /* Add the MMU flush */ if (gpu->mmu->version == ETNAVIV_IOMMU_V1) { CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_MMU, @@ -367,7 +368,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, SYNC_RECIPIENT_PE); } - gpu->mmu->need_flush = false; + gpu->flush_seq = gpu->mmu->flush_seq; } if (switch_context) { diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h index 004f2cdfb4e0..9ab0b4548e55 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h @@ -136,6 +136,7 @@ struct etnaviv_gpu { int irq; struct etnaviv_iommu *mmu; + unsigned int flush_seq; /* Power Control: */ struct clk *clk_bus; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c index 070509a1f949..c4092c8def4f 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c @@ -261,7 +261,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, } list_add_tail(&mapping->mmu_node, &mmu->mappings); - mmu->need_flush = true; + mmu->flush_seq++; unlock: mutex_unlock(&mmu->lock); @@ -280,7 +280,7 @@ void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, etnaviv_iommu_remove_mapping(mmu, mapping); list_del(&mapping->mmu_node); - mmu->need_flush = true; + mmu->flush_seq++; mutex_unlock(&mmu->lock); } @@ -365,7 +365,7 @@ int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu *mmu, } list_add_tail(&mapping->mmu_node, &mmu->mappings); - mmu->need_flush = true; + mmu->flush_seq++; unlock: mutex_unlock(&mmu->lock); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h index fe1c9d6b9334..34afe25df9ca 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h @@ -48,7 +48,7 @@ struct etnaviv_iommu { struct mutex lock; struct list_head mappings; struct drm_mm mm; - bool need_flush; + unsigned int flush_seq; }; struct etnaviv_gem_object; From patchwork Wed Dec 19 14:45:42 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lucas Stach X-Patchwork-Id: 10737369 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 43BF013B5 for ; Wed, 19 Dec 2018 14:46:05 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 309C42969C for ; Wed, 19 Dec 2018 14:46:05 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2421D29D39; Wed, 19 Dec 2018 14:46:05 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) 
by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 15D6F2969C for ; Wed, 19 Dec 2018 14:46:01 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 10B526EFFD; Wed, 19 Dec 2018 14:45:53 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from metis.ext.pengutronix.de (metis.ext.pengutronix.de [IPv6:2001:67c:670:201:290:27ff:fe1d:cc33]) by gabe.freedesktop.org (Postfix) with ESMTPS id 844D66EFF4 for ; Wed, 19 Dec 2018 14:45:51 +0000 (UTC) Received: from dude.hi.pengutronix.de ([2001:67c:670:100:1d::7] helo=dude.pengutronix.de.) by metis.ext.pengutronix.de with esmtp (Exim 4.89) (envelope-from ) id 1gZd6Z-0006y1-Ad; Wed, 19 Dec 2018 15:45:47 +0100 From: Lucas Stach To: etnaviv@lists.freedesktop.org Subject: [PATCH 06/10] drm/etnaviv: rework MMU handling Date: Wed, 19 Dec 2018 15:45:42 +0100 Message-Id: <20181219144546.28224-7-l.stach@pengutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de> References: <20181219144546.28224-1-l.stach@pengutronix.de> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 2001:67c:670:100:1d::7 X-SA-Exim-Mail-From: l.stach@pengutronix.de X-SA-Exim-Scanned: No (on metis.ext.pengutronix.de); SAEximRunCond expanded to false X-PTX-Original-Recipient: dri-devel@lists.freedesktop.org X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP This reworks the MMU handling to make it possible to have multiple MMU contexts, not one per GPU. This commit doesn't actually do anything with those contexts, aside from giving one of them to each GPU. The code changes are pretty invasive, but there is no sane way to split this into smaller changes. 
Signed-off-by: Lucas Stach --- drivers/gpu/drm/etnaviv/etnaviv_buffer.c | 8 +- drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c | 8 +- drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h | 6 +- drivers/gpu/drm/etnaviv/etnaviv_drv.c | 1 + drivers/gpu/drm/etnaviv/etnaviv_drv.h | 4 +- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 14 +- drivers/gpu/drm/etnaviv/etnaviv_gem.h | 2 +- drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 24 +- drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 3 +- drivers/gpu/drm/etnaviv/etnaviv_iommu.c | 150 ++++++------ drivers/gpu/drm/etnaviv/etnaviv_iommu.h | 20 -- drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c | 264 +++++++++------------ drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 231 ++++++++++-------- drivers/gpu/drm/etnaviv/etnaviv_mmu.h | 88 +++++-- 14 files changed, 416 insertions(+), 407 deletions(-) delete mode 100644 drivers/gpu/drm/etnaviv/etnaviv_iommu.h diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c index d52c01c195bd..e1347630fa11 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c @@ -205,7 +205,7 @@ u16 etnaviv_buffer_config_mmuv2(struct etnaviv_gpu *gpu, u32 mtlb_addr, u32 safe return buffer->user_size / 8; } -u16 etnaviv_buffer_config_pta(struct etnaviv_gpu *gpu) +u16 etnaviv_buffer_config_pta(struct etnaviv_gpu *gpu, unsigned short id) { struct etnaviv_cmdbuf *buffer = &gpu->buffer; @@ -214,7 +214,7 @@ u16 etnaviv_buffer_config_pta(struct etnaviv_gpu *gpu) buffer->user_size = 0; CMD_LOAD_STATE(buffer, VIVS_MMUv2_PTA_CONFIG, - VIVS_MMUv2_PTA_CONFIG_INDEX(0)); + VIVS_MMUv2_PTA_CONFIG_INDEX(id)); CMD_END(buffer); @@ -336,7 +336,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, /* flush command */ if (need_flush) { - if (gpu->mmu->version == ETNAVIV_IOMMU_V1) + if (gpu->mmu->global->version == ETNAVIV_IOMMU_V1) extra_dwords += 1; else extra_dwords += 3; @@ -350,7 +350,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, if (need_flush) { /* Add the MMU flush */ - if (gpu->mmu->version == ETNAVIV_IOMMU_V1) { + if (gpu->mmu->global->version == ETNAVIV_IOMMU_V1) { CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_MMU, VIVS_GL_FLUSH_MMU_FLUSH_FEMMU | VIVS_GL_FLUSH_MMU_FLUSH_UNK1 | diff --git a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c index a01ae32dcd88..4d7a2341e11c 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.c @@ -57,18 +57,18 @@ etnaviv_cmdbuf_suballoc_new(struct device *dev) } int etnaviv_cmdbuf_suballoc_map(struct etnaviv_cmdbuf_suballoc *suballoc, - struct etnaviv_iommu *mmu, + struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping, u32 memory_base) { - return etnaviv_iommu_get_suballoc_va(mmu, mapping, memory_base, + return etnaviv_iommu_get_suballoc_va(context, mapping, memory_base, suballoc->paddr, SUBALLOC_SIZE); } -void etnaviv_cmdbuf_suballoc_unmap(struct etnaviv_iommu *mmu, +void etnaviv_cmdbuf_suballoc_unmap(struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping) { - etnaviv_iommu_put_suballoc_va(mmu, mapping); + etnaviv_iommu_put_suballoc_va(context, mapping); } void etnaviv_cmdbuf_suballoc_destroy(struct etnaviv_cmdbuf_suballoc *suballoc) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h index 7fdc2e3fea5f..b59dffb8d940 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_cmdbuf.h @@ -9,7 +9,7 @@ #include struct device; -struct etnaviv_iommu; 
+struct etnaviv_iommu_context; struct etnaviv_vram_mapping; struct etnaviv_cmdbuf_suballoc; @@ -27,10 +27,10 @@ struct etnaviv_cmdbuf_suballoc * etnaviv_cmdbuf_suballoc_new(struct device *dev); void etnaviv_cmdbuf_suballoc_destroy(struct etnaviv_cmdbuf_suballoc *suballoc); int etnaviv_cmdbuf_suballoc_map(struct etnaviv_cmdbuf_suballoc *suballoc, - struct etnaviv_iommu *mmu, + struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping, u32 memory_base); -void etnaviv_cmdbuf_suballoc_unmap(struct etnaviv_iommu *mmu, +void etnaviv_cmdbuf_suballoc_unmap(struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c index b68c28e994c9..103745d8f54c 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c @@ -565,6 +565,7 @@ static void etnaviv_unbind(struct device *dev) component_unbind_all(dev, drm); + etnaviv_iommu_global_fini(priv->mmu_global); etnaviv_cmdbuf_suballoc_destroy(priv->cmdbuf_suballoc); drm->dev_private = NULL; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 5c1b6ea25e59..da933f41688f 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -32,6 +32,7 @@ struct etnaviv_gpu; struct etnaviv_mmu; struct etnaviv_gem_object; struct etnaviv_gem_submit; +struct etnaviv_iommu_global; struct etnaviv_file_private { /* @@ -46,6 +47,7 @@ struct etnaviv_drm_private { struct etnaviv_gpu *gpu[ETNA_MAX_PIPES]; struct etnaviv_cmdbuf_suballoc *cmdbuf_suballoc; + struct etnaviv_iommu_global *mmu_global; /* list of GEM objects: */ struct mutex gem_lock; @@ -79,7 +81,7 @@ int etnaviv_gem_new_userptr(struct drm_device *dev, struct drm_file *file, uintptr_t ptr, u32 size, u32 flags, u32 *handle); u16 etnaviv_buffer_init(struct etnaviv_gpu *gpu); u16 etnaviv_buffer_config_mmuv2(struct etnaviv_gpu *gpu, u32 mtlb_addr, u32 safe_addr); -u16 etnaviv_buffer_config_pta(struct etnaviv_gpu *gpu); +u16 etnaviv_buffer_config_pta(struct etnaviv_gpu *gpu, unsigned short id); void etnaviv_buffer_end(struct etnaviv_gpu *gpu); void etnaviv_sync_point_queue(struct etnaviv_gpu *gpu, unsigned int event); void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c index 1fa74226db91..c95deba6f668 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c @@ -222,12 +222,12 @@ int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset) static struct etnaviv_vram_mapping * etnaviv_gem_get_vram_mapping(struct etnaviv_gem_object *obj, - struct etnaviv_iommu *mmu) + struct etnaviv_iommu_context *context) { struct etnaviv_vram_mapping *mapping; list_for_each_entry(mapping, &obj->vram_list, obj_node) { - if (mapping->mmu == mmu) + if (mapping->context == context) return mapping; } @@ -277,7 +277,7 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( */ if (mapping->use == 0) { mutex_lock(&gpu->mmu->lock); - if (mapping->mmu == gpu->mmu) + if (mapping->context == gpu->mmu) mapping->use += 1; else mapping = NULL; @@ -314,7 +314,7 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( list_del(&mapping->obj_node); } - mapping->mmu = gpu->mmu; + mapping->context = gpu->mmu; mapping->use = 1; ret = etnaviv_iommu_map_gem(gpu->mmu, etnaviv_obj, gpu->memory_base, @@ -536,12 +536,12 @@ void etnaviv_gem_free_object(struct drm_gem_object *obj) 
list_for_each_entry_safe(mapping, tmp, &etnaviv_obj->vram_list, obj_node) { - struct etnaviv_iommu *mmu = mapping->mmu; + struct etnaviv_iommu_context *context = mapping->context; WARN_ON(mapping->use); - if (mmu) - etnaviv_iommu_unmap_gem(mmu, mapping); + if (context) + etnaviv_iommu_unmap_gem(context, mapping); list_del(&mapping->obj_node); kfree(mapping); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h index f0abb744ef95..cf94f0b24584 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h @@ -25,7 +25,7 @@ struct etnaviv_vram_mapping { struct list_head scan_node; struct list_head mmu_node; struct etnaviv_gem_object *object; - struct etnaviv_iommu *mmu; + struct etnaviv_iommu_context *context; struct drm_mm_node vram_node; unsigned int use; u32 iova; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c index 70351568b3a4..287995c58134 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c @@ -679,7 +679,7 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) etnaviv_gpu_setup_pulse_eater(gpu); /* setup the MMU */ - etnaviv_iommu_restore(gpu); + etnaviv_iommu_restore(gpu, gpu->mmu); /* Start command processor */ prefetch = etnaviv_buffer_init(gpu); @@ -691,6 +691,7 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) { + enum etnaviv_iommu_version mmu_version = ETNAVIV_IOMMU_V1; int ret, i; ret = pm_runtime_get_sync(gpu->dev); @@ -751,7 +752,20 @@ int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) goto fail; } - gpu->mmu = etnaviv_iommu_new(gpu); + if (gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION) + mmu_version = ETNAVIV_IOMMU_V2; + + if (!priv->mmu_global) + priv->mmu_global = etnaviv_iommu_global_init(gpu->drm->dev, + mmu_version); + + if (!priv->mmu_global || priv->mmu_global->version != mmu_version) { + ret = -ENXIO; + dev_err(gpu->dev, "failed to init IOMMU global state\n"); + goto fail; + } + + gpu->mmu = etnaviv_iommu_context_init(priv->mmu_global); if (IS_ERR(gpu->mmu)) { dev_err(gpu->dev, "Failed to instantiate GPU IOMMU\n"); ret = PTR_ERR(gpu->mmu); @@ -774,7 +788,7 @@ int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) goto unmap_suballoc; } - if (gpu->mmu->version == ETNAVIV_IOMMU_V1 && + if (mmu_version == ETNAVIV_IOMMU_V1 && etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping) > 0x80000000) { ret = -EINVAL; dev_err(gpu->dev, @@ -806,7 +820,7 @@ int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) unmap_suballoc: etnaviv_cmdbuf_suballoc_unmap(gpu->mmu, &gpu->cmdbuf_mapping); destroy_iommu: - etnaviv_iommu_destroy(gpu->mmu); + etnaviv_iommu_context_put(gpu->mmu); gpu->mmu = NULL; fail: pm_runtime_mark_last_busy(gpu->dev); @@ -1685,7 +1699,7 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, etnaviv_cmdbuf_suballoc_unmap(gpu->mmu, &gpu->cmdbuf_mapping); if (gpu->mmu) { - etnaviv_iommu_destroy(gpu->mmu); + etnaviv_iommu_context_put(gpu->mmu); gpu->mmu = NULL; } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h index 9ab0b4548e55..50f03ee55500 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h @@ -8,6 +8,7 @@ #include "etnaviv_cmdbuf.h" #include "etnaviv_gem.h" +#include "etnaviv_mmu.h" #include "etnaviv_drv.h" 
struct etnaviv_gem_submit; @@ -135,7 +136,7 @@ struct etnaviv_gpu { void __iomem *mmio; int irq; - struct etnaviv_iommu *mmu; + struct etnaviv_iommu_context *mmu; unsigned int flush_seq; /* Power Control: */ diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c index b163bdbcb880..c93aa0b1ad9e 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c @@ -11,7 +11,6 @@ #include "etnaviv_gpu.h" #include "etnaviv_mmu.h" -#include "etnaviv_iommu.h" #include "state_hi.xml.h" #define PT_SIZE SZ_2M @@ -19,113 +18,78 @@ #define GPU_MEM_START 0x80000000 -struct etnaviv_iommuv1_domain { - struct etnaviv_iommu_domain base; +struct etnaviv_iommuv1_context { + struct etnaviv_iommu_context base; u32 *pgtable_cpu; dma_addr_t pgtable_dma; }; -static struct etnaviv_iommuv1_domain * -to_etnaviv_domain(struct etnaviv_iommu_domain *domain) +static struct etnaviv_iommuv1_context * +to_v1_context(struct etnaviv_iommu_context *context) { - return container_of(domain, struct etnaviv_iommuv1_domain, base); + return container_of(context, struct etnaviv_iommuv1_context, base); } -static int __etnaviv_iommu_init(struct etnaviv_iommuv1_domain *etnaviv_domain) +static void etnaviv_iommuv1_free(struct etnaviv_iommu_context *context) { - u32 *p; - int i; - - etnaviv_domain->base.bad_page_cpu = - dma_alloc_wc(etnaviv_domain->base.dev, SZ_4K, - &etnaviv_domain->base.bad_page_dma, - GFP_KERNEL); - if (!etnaviv_domain->base.bad_page_cpu) - return -ENOMEM; - - p = etnaviv_domain->base.bad_page_cpu; - for (i = 0; i < SZ_4K / 4; i++) - *p++ = 0xdead55aa; - - etnaviv_domain->pgtable_cpu = dma_alloc_wc(etnaviv_domain->base.dev, - PT_SIZE, - &etnaviv_domain->pgtable_dma, - GFP_KERNEL); - if (!etnaviv_domain->pgtable_cpu) { - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->base.bad_page_cpu, - etnaviv_domain->base.bad_page_dma); - return -ENOMEM; - } - - memset32(etnaviv_domain->pgtable_cpu, etnaviv_domain->base.bad_page_dma, - PT_ENTRIES); - - return 0; -} + struct etnaviv_iommuv1_context *v1_context = to_v1_context(context); -static void etnaviv_iommuv1_domain_free(struct etnaviv_iommu_domain *domain) -{ - struct etnaviv_iommuv1_domain *etnaviv_domain = - to_etnaviv_domain(domain); + drm_mm_takedown(&context->mm); - dma_free_wc(etnaviv_domain->base.dev, PT_SIZE, - etnaviv_domain->pgtable_cpu, etnaviv_domain->pgtable_dma); + dma_free_wc(context->global->dev, PT_SIZE, v1_context->pgtable_cpu, + v1_context->pgtable_dma); - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->base.bad_page_cpu, - etnaviv_domain->base.bad_page_dma); + context->global->v1.shared_context = NULL; - kfree(etnaviv_domain); + kfree(v1_context); } -static int etnaviv_iommuv1_map(struct etnaviv_iommu_domain *domain, +static int etnaviv_iommuv1_map(struct etnaviv_iommu_context *context, unsigned long iova, phys_addr_t paddr, size_t size, int prot) { - struct etnaviv_iommuv1_domain *etnaviv_domain = to_etnaviv_domain(domain); + struct etnaviv_iommuv1_context *v1_context = to_v1_context(context); unsigned int index = (iova - GPU_MEM_START) / SZ_4K; if (size != SZ_4K) return -EINVAL; - etnaviv_domain->pgtable_cpu[index] = paddr; + v1_context->pgtable_cpu[index] = paddr; return 0; } -static size_t etnaviv_iommuv1_unmap(struct etnaviv_iommu_domain *domain, +static size_t etnaviv_iommuv1_unmap(struct etnaviv_iommu_context *context, unsigned long iova, size_t size) { - struct etnaviv_iommuv1_domain *etnaviv_domain = - to_etnaviv_domain(domain); + struct 
etnaviv_iommuv1_context *v1_context = to_v1_context(context); unsigned int index = (iova - GPU_MEM_START) / SZ_4K; if (size != SZ_4K) return -EINVAL; - etnaviv_domain->pgtable_cpu[index] = etnaviv_domain->base.bad_page_dma; + v1_context->pgtable_cpu[index] = context->global->bad_page_dma; return SZ_4K; } -static size_t etnaviv_iommuv1_dump_size(struct etnaviv_iommu_domain *domain) +static size_t etnaviv_iommuv1_dump_size(struct etnaviv_iommu_context *context) { return PT_SIZE; } -static void etnaviv_iommuv1_dump(struct etnaviv_iommu_domain *domain, void *buf) +static void etnaviv_iommuv1_dump(struct etnaviv_iommu_context *context, + void *buf) { - struct etnaviv_iommuv1_domain *etnaviv_domain = - to_etnaviv_domain(domain); + struct etnaviv_iommuv1_context *v1_context = to_v1_context(context); - memcpy(buf, etnaviv_domain->pgtable_cpu, PT_SIZE); + memcpy(buf, v1_context->pgtable_cpu, PT_SIZE); } -void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu) +static void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu, + struct etnaviv_iommu_context *context) { - struct etnaviv_iommuv1_domain *etnaviv_domain = - to_etnaviv_domain(gpu->mmu->domain); + struct etnaviv_iommuv1_context *v1_context = to_v1_context(context); u32 pgtable; /* set base addresses */ @@ -136,7 +100,7 @@ void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu) gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PE, gpu->memory_base); /* set page table address in MC */ - pgtable = (u32)etnaviv_domain->pgtable_dma; + pgtable = (u32)v1_context->pgtable_dma; gpu_write(gpu, VIVS_MC_MMU_FE_PAGE_TABLE, pgtable); gpu_write(gpu, VIVS_MC_MMU_TX_PAGE_TABLE, pgtable); @@ -145,39 +109,61 @@ void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu) gpu_write(gpu, VIVS_MC_MMU_RA_PAGE_TABLE, pgtable); } -static const struct etnaviv_iommu_domain_ops etnaviv_iommuv1_ops = { - .free = etnaviv_iommuv1_domain_free, + +const struct etnaviv_iommu_ops etnaviv_iommuv1_ops = { + .free = etnaviv_iommuv1_free, .map = etnaviv_iommuv1_map, .unmap = etnaviv_iommuv1_unmap, .dump_size = etnaviv_iommuv1_dump_size, .dump = etnaviv_iommuv1_dump, + .restore = etnaviv_iommuv1_restore, }; -struct etnaviv_iommu_domain * -etnaviv_iommuv1_domain_alloc(struct etnaviv_gpu *gpu) +struct etnaviv_iommu_context * +etnaviv_iommuv1_context_alloc(struct etnaviv_iommu_global *global) { - struct etnaviv_iommuv1_domain *etnaviv_domain; - struct etnaviv_iommu_domain *domain; - int ret; + struct etnaviv_iommuv1_context *v1_context; + struct etnaviv_iommu_context *context; + + mutex_lock(&global->lock); + + /* MMUv1 does not support switching between different contexts without + * a stop the world operation, so we only support a single shared + * context with this version. 
+ */ + if (global->v1.shared_context) { + context = global->v1.shared_context; + etnaviv_iommu_context_get(context); + mutex_unlock(&global->lock); + return context; + } - etnaviv_domain = kzalloc(sizeof(*etnaviv_domain), GFP_KERNEL); - if (!etnaviv_domain) + v1_context = kzalloc(sizeof(*v1_context), GFP_KERNEL); + if (!v1_context) return NULL; - domain = &etnaviv_domain->base; + v1_context->pgtable_cpu = dma_alloc_wc(global->dev, PT_SIZE, + &v1_context->pgtable_dma, + GFP_KERNEL); + if (!v1_context->pgtable_cpu) + goto out_free; - domain->dev = gpu->dev; - domain->base = GPU_MEM_START; - domain->size = PT_ENTRIES * SZ_4K; - domain->ops = &etnaviv_iommuv1_ops; + memset32(v1_context->pgtable_cpu, global->bad_page_dma, PT_ENTRIES); - ret = __etnaviv_iommu_init(etnaviv_domain); - if (ret) - goto out_free; + context = &v1_context->base; + context->global = global; + kref_init(&context->refcount); + mutex_init(&context->lock); + INIT_LIST_HEAD(&context->mappings); + drm_mm_init(&context->mm, GPU_MEM_START, PT_ENTRIES * SZ_4K); + context->global->v1.shared_context = context; + + mutex_unlock(&global->lock); - return &etnaviv_domain->base; + return context; out_free: - kfree(etnaviv_domain); + mutex_unlock(&global->lock); + kfree(v1_context); return NULL; } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu.h b/drivers/gpu/drm/etnaviv/etnaviv_iommu.h deleted file mode 100644 index b279404ce91a..000000000000 --- a/drivers/gpu/drm/etnaviv/etnaviv_iommu.h +++ /dev/null @@ -1,20 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Copyright (C) 2014-2018 Etnaviv Project - */ - -#ifndef __ETNAVIV_IOMMU_H__ -#define __ETNAVIV_IOMMU_H__ - -struct etnaviv_gpu; -struct etnaviv_iommu_domain; - -struct etnaviv_iommu_domain * -etnaviv_iommuv1_domain_alloc(struct etnaviv_gpu *gpu); -void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu); - -struct etnaviv_iommu_domain * -etnaviv_iommuv2_domain_alloc(struct etnaviv_gpu *gpu); -void etnaviv_iommuv2_restore(struct etnaviv_gpu *gpu); - -#endif /* __ETNAVIV_IOMMU_H__ */ diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c index f794e04be9e6..8b6b10354228 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c @@ -12,7 +12,6 @@ #include "etnaviv_cmdbuf.h" #include "etnaviv_gpu.h" #include "etnaviv_mmu.h" -#include "etnaviv_iommu.h" #include "state.xml.h" #include "state_hi.xml.h" @@ -27,11 +26,9 @@ #define MMUv2_MAX_STLB_ENTRIES 1024 -struct etnaviv_iommuv2_domain { - struct etnaviv_iommu_domain base; - /* P(age) T(able) A(rray) */ - u64 *pta_cpu; - dma_addr_t pta_dma; +struct etnaviv_iommuv2_context { + struct etnaviv_iommu_context base; + unsigned short id; /* M(aster) TLB aka first level pagetable */ u32 *mtlb_cpu; dma_addr_t mtlb_dma; @@ -40,41 +37,62 @@ struct etnaviv_iommuv2_domain { dma_addr_t stlb_dma[MMUv2_MAX_STLB_ENTRIES]; }; -static struct etnaviv_iommuv2_domain * -to_etnaviv_domain(struct etnaviv_iommu_domain *domain) +static struct etnaviv_iommuv2_context * +to_v2_context(struct etnaviv_iommu_context *context) { - return container_of(domain, struct etnaviv_iommuv2_domain, base); + return container_of(context, struct etnaviv_iommuv2_context, base); } +static void etnaviv_iommuv2_free(struct etnaviv_iommu_context *context) +{ + struct etnaviv_iommuv2_context *v2_context = to_v2_context(context); + int i; + + drm_mm_takedown(&context->mm); + + for (i = 0; i < MMUv2_MAX_STLB_ENTRIES; i++) { + if (v2_context->stlb_cpu[i]) + dma_free_wc(context->global->dev, 
SZ_4K, + v2_context->stlb_cpu[i], + v2_context->stlb_dma[i]); + } + + dma_free_wc(context->global->dev, SZ_4K, v2_context->mtlb_cpu, + v2_context->mtlb_dma); + + clear_bit(v2_context->id, context->global->v2.pta_alloc); + + vfree(v2_context); +} static int -etnaviv_iommuv2_ensure_stlb(struct etnaviv_iommuv2_domain *etnaviv_domain, +etnaviv_iommuv2_ensure_stlb(struct etnaviv_iommuv2_context *v2_context, int stlb) { - if (etnaviv_domain->stlb_cpu[stlb]) + if (v2_context->stlb_cpu[stlb]) return 0; - etnaviv_domain->stlb_cpu[stlb] = - dma_alloc_wc(etnaviv_domain->base.dev, SZ_4K, - &etnaviv_domain->stlb_dma[stlb], + v2_context->stlb_cpu[stlb] = + dma_alloc_wc(v2_context->base.global->dev, SZ_4K, + &v2_context->stlb_dma[stlb], GFP_KERNEL); - if (!etnaviv_domain->stlb_cpu[stlb]) + if (!v2_context->stlb_cpu[stlb]) return -ENOMEM; - memset32(etnaviv_domain->stlb_cpu[stlb], MMUv2_PTE_EXCEPTION, + memset32(v2_context->stlb_cpu[stlb], MMUv2_PTE_EXCEPTION, SZ_4K / sizeof(u32)); - etnaviv_domain->mtlb_cpu[stlb] = etnaviv_domain->stlb_dma[stlb] | - MMUv2_PTE_PRESENT; + v2_context->mtlb_cpu[stlb] = + v2_context->stlb_dma[stlb] | MMUv2_PTE_PRESENT; + return 0; } -static int etnaviv_iommuv2_map(struct etnaviv_iommu_domain *domain, +static int etnaviv_iommuv2_map(struct etnaviv_iommu_context *context, unsigned long iova, phys_addr_t paddr, size_t size, int prot) { - struct etnaviv_iommuv2_domain *etnaviv_domain = - to_etnaviv_domain(domain); + struct etnaviv_iommuv2_context *v2_context = to_v2_context(context); int mtlb_entry, stlb_entry, ret; u32 entry = lower_32_bits(paddr) | MMUv2_PTE_PRESENT; @@ -90,20 +108,19 @@ static int etnaviv_iommuv2_map(struct etnaviv_iommu_domain *domain, mtlb_entry = (iova & MMUv2_MTLB_MASK) >> MMUv2_MTLB_SHIFT; stlb_entry = (iova & MMUv2_STLB_MASK) >> MMUv2_STLB_SHIFT; - ret = etnaviv_iommuv2_ensure_stlb(etnaviv_domain, mtlb_entry); + ret = etnaviv_iommuv2_ensure_stlb(v2_context, mtlb_entry); if (ret) return ret; - etnaviv_domain->stlb_cpu[mtlb_entry][stlb_entry] = entry; + v2_context->stlb_cpu[mtlb_entry][stlb_entry] = entry; return 0; } -static size_t etnaviv_iommuv2_unmap(struct etnaviv_iommu_domain *domain, +static size_t etnaviv_iommuv2_unmap(struct etnaviv_iommu_context *context, unsigned long iova, size_t size) { - struct etnaviv_iommuv2_domain *etnaviv_domain = - to_etnaviv_domain(domain); + struct etnaviv_iommuv2_context *etnaviv_domain = to_v2_context(context); int mtlb_entry, stlb_entry; if (size != SZ_4K) @@ -117,118 +134,35 @@ static size_t etnaviv_iommuv2_unmap(struct etnaviv_iommu_domain *domain, return SZ_4K; } -static int etnaviv_iommuv2_init(struct etnaviv_iommuv2_domain *etnaviv_domain) -{ - int ret; - - /* allocate scratch page */ - etnaviv_domain->base.bad_page_cpu = - dma_alloc_wc(etnaviv_domain->base.dev, SZ_4K, - &etnaviv_domain->base.bad_page_dma, - GFP_KERNEL); - if (!etnaviv_domain->base.bad_page_cpu) { - ret = -ENOMEM; - goto fail_mem; - } - - memset32(etnaviv_domain->base.bad_page_cpu, 0xdead55aa, - SZ_4K / sizeof(u32)); - - etnaviv_domain->pta_cpu = dma_alloc_wc(etnaviv_domain->base.dev, - SZ_4K, &etnaviv_domain->pta_dma, - GFP_KERNEL); - if (!etnaviv_domain->pta_cpu) { - ret = -ENOMEM; - goto fail_mem; - } - - etnaviv_domain->mtlb_cpu = dma_alloc_wc(etnaviv_domain->base.dev, - SZ_4K, &etnaviv_domain->mtlb_dma, - GFP_KERNEL); - if (!etnaviv_domain->mtlb_cpu) { - ret = -ENOMEM; - goto fail_mem; - } - - memset32(etnaviv_domain->mtlb_cpu, MMUv2_PTE_EXCEPTION, - MMUv2_MAX_STLB_ENTRIES); - - return 0; - -fail_mem: - if 
(etnaviv_domain->base.bad_page_cpu) - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->base.bad_page_cpu, - etnaviv_domain->base.bad_page_dma); - - if (etnaviv_domain->pta_cpu) - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->pta_cpu, etnaviv_domain->pta_dma); - - if (etnaviv_domain->mtlb_cpu) - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->mtlb_cpu, etnaviv_domain->mtlb_dma); - - return ret; -} - -static void etnaviv_iommuv2_domain_free(struct etnaviv_iommu_domain *domain) -{ - struct etnaviv_iommuv2_domain *etnaviv_domain = - to_etnaviv_domain(domain); - int i; - - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->base.bad_page_cpu, - etnaviv_domain->base.bad_page_dma); - - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->pta_cpu, etnaviv_domain->pta_dma); - - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->mtlb_cpu, etnaviv_domain->mtlb_dma); - - for (i = 0; i < MMUv2_MAX_STLB_ENTRIES; i++) { - if (etnaviv_domain->stlb_cpu[i]) - dma_free_wc(etnaviv_domain->base.dev, SZ_4K, - etnaviv_domain->stlb_cpu[i], - etnaviv_domain->stlb_dma[i]); - } - - vfree(etnaviv_domain); -} - -static size_t etnaviv_iommuv2_dump_size(struct etnaviv_iommu_domain *domain) +static size_t etnaviv_iommuv2_dump_size(struct etnaviv_iommu_context *context) { - struct etnaviv_iommuv2_domain *etnaviv_domain = - to_etnaviv_domain(domain); + struct etnaviv_iommuv2_context *v2_context = to_v2_context(context); size_t dump_size = SZ_4K; int i; for (i = 0; i < MMUv2_MAX_STLB_ENTRIES; i++) - if (etnaviv_domain->mtlb_cpu[i] & MMUv2_PTE_PRESENT) + if (v2_context->mtlb_cpu[i] & MMUv2_PTE_PRESENT) dump_size += SZ_4K; return dump_size; } -static void etnaviv_iommuv2_dump(struct etnaviv_iommu_domain *domain, void *buf) +static void etnaviv_iommuv2_dump(struct etnaviv_iommu_context *context, void *buf) { - struct etnaviv_iommuv2_domain *etnaviv_domain = - to_etnaviv_domain(domain); + struct etnaviv_iommuv2_context *v2_context = to_v2_context(context); int i; - memcpy(buf, etnaviv_domain->mtlb_cpu, SZ_4K); + memcpy(buf, v2_context->mtlb_cpu, SZ_4K); buf += SZ_4K; for (i = 0; i < MMUv2_MAX_STLB_ENTRIES; i++, buf += SZ_4K) - if (etnaviv_domain->mtlb_cpu[i] & MMUv2_PTE_PRESENT) - memcpy(buf, etnaviv_domain->stlb_cpu[i], SZ_4K); + if (v2_context->mtlb_cpu[i] & MMUv2_PTE_PRESENT) + memcpy(buf, v2_context->stlb_cpu[i], SZ_4K); } -static void etnaviv_iommuv2_restore_nonsec(struct etnaviv_gpu *gpu) +static void etnaviv_iommuv2_restore_nonsec(struct etnaviv_gpu *gpu, + struct etnaviv_iommu_context *context) { - struct etnaviv_iommuv2_domain *etnaviv_domain = - to_etnaviv_domain(gpu->mmu->domain); + struct etnaviv_iommuv2_context *v2_context = to_v2_context(context); u16 prefetch; /* If the MMU is already enabled the state is still there. 
*/ @@ -236,8 +170,8 @@ static void etnaviv_iommuv2_restore_nonsec(struct etnaviv_gpu *gpu) return; prefetch = etnaviv_buffer_config_mmuv2(gpu, - (u32)etnaviv_domain->mtlb_dma, - (u32)etnaviv_domain->base.bad_page_dma); + (u32)v2_context->mtlb_dma, + (u32)context->global->bad_page_dma); etnaviv_gpu_start_fe(gpu, (u32)etnaviv_cmdbuf_get_pa(&gpu->buffer), prefetch); etnaviv_gpu_wait_idle(gpu, 100); @@ -245,10 +179,10 @@ static void etnaviv_iommuv2_restore_nonsec(struct etnaviv_gpu *gpu) gpu_write(gpu, VIVS_MMUv2_CONTROL, VIVS_MMUv2_CONTROL_ENABLE); } -static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu) +static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu, + struct etnaviv_iommu_context *context) { - struct etnaviv_iommuv2_domain *etnaviv_domain = - to_etnaviv_domain(gpu->mmu->domain); + struct etnaviv_iommuv2_context *v2_context = to_v2_context(context); u16 prefetch; /* If the MMU is already enabled the state is still there. */ @@ -256,26 +190,26 @@ static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu) return; gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_LOW, - lower_32_bits(etnaviv_domain->pta_dma)); + lower_32_bits(context->global->v2.pta_dma)); gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_HIGH, - upper_32_bits(etnaviv_domain->pta_dma)); + upper_32_bits(context->global->v2.pta_dma)); gpu_write(gpu, VIVS_MMUv2_PTA_CONTROL, VIVS_MMUv2_PTA_CONTROL_ENABLE); gpu_write(gpu, VIVS_MMUv2_NONSEC_SAFE_ADDR_LOW, - lower_32_bits(etnaviv_domain->base.bad_page_dma)); + lower_32_bits(context->global->bad_page_dma)); gpu_write(gpu, VIVS_MMUv2_SEC_SAFE_ADDR_LOW, - lower_32_bits(etnaviv_domain->base.bad_page_dma)); + lower_32_bits(context->global->bad_page_dma)); gpu_write(gpu, VIVS_MMUv2_SAFE_ADDRESS_CONFIG, VIVS_MMUv2_SAFE_ADDRESS_CONFIG_NON_SEC_SAFE_ADDR_HIGH( - upper_32_bits(etnaviv_domain->base.bad_page_dma)) | + upper_32_bits(context->global->bad_page_dma)) | VIVS_MMUv2_SAFE_ADDRESS_CONFIG_SEC_SAFE_ADDR_HIGH( - upper_32_bits(etnaviv_domain->base.bad_page_dma))); + upper_32_bits(context->global->bad_page_dma))); - etnaviv_domain->pta_cpu[0] = etnaviv_domain->mtlb_dma | - VIVS_MMUv2_CONFIGURATION_MODE_MODE4_K; + context->global->v2.pta_cpu[0] = v2_context->mtlb_dma | + VIVS_MMUv2_CONFIGURATION_MODE_MODE4_K; /* trigger a PTA load through the FE */ - prefetch = etnaviv_buffer_config_pta(gpu); + prefetch = etnaviv_buffer_config_pta(gpu, v2_context->id); etnaviv_gpu_start_fe(gpu, (u32)etnaviv_cmdbuf_get_pa(&gpu->buffer), prefetch); etnaviv_gpu_wait_idle(gpu, 100); @@ -283,14 +217,15 @@ static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu) gpu_write(gpu, VIVS_MMUv2_SEC_CONTROL, VIVS_MMUv2_SEC_CONTROL_ENABLE); } -void etnaviv_iommuv2_restore(struct etnaviv_gpu *gpu) +static void etnaviv_iommuv2_restore(struct etnaviv_gpu *gpu, + struct etnaviv_iommu_context *context) { switch (gpu->sec_mode) { case ETNA_SEC_NONE: - etnaviv_iommuv2_restore_nonsec(gpu); + etnaviv_iommuv2_restore_nonsec(gpu, context); break; case ETNA_SEC_KERNEL: - etnaviv_iommuv2_restore_sec(gpu); + etnaviv_iommuv2_restore_sec(gpu, context); break; default: WARN(1, "unhandled GPU security mode\n"); @@ -298,39 +233,56 @@ void etnaviv_iommuv2_restore(struct etnaviv_gpu *gpu) } } -static const struct etnaviv_iommu_domain_ops etnaviv_iommuv2_ops = { - .free = etnaviv_iommuv2_domain_free, +const struct etnaviv_iommu_ops etnaviv_iommuv2_ops = { + .free = etnaviv_iommuv2_free, .map = etnaviv_iommuv2_map, .unmap = etnaviv_iommuv2_unmap, .dump_size = etnaviv_iommuv2_dump_size, .dump = etnaviv_iommuv2_dump, + .restore = 
etnaviv_iommuv2_restore, }; -struct etnaviv_iommu_domain * -etnaviv_iommuv2_domain_alloc(struct etnaviv_gpu *gpu) +struct etnaviv_iommu_context * +etnaviv_iommuv2_context_alloc(struct etnaviv_iommu_global *global) { - struct etnaviv_iommuv2_domain *etnaviv_domain; - struct etnaviv_iommu_domain *domain; - int ret; + struct etnaviv_iommuv2_context *v2_context; + struct etnaviv_iommu_context *context; - etnaviv_domain = vzalloc(sizeof(*etnaviv_domain)); - if (!etnaviv_domain) + v2_context = vzalloc(sizeof(*v2_context)); + if (!v2_context) return NULL; - domain = &etnaviv_domain->base; + mutex_lock(&global->lock); + v2_context->id = find_first_zero_bit(global->v2.pta_alloc, + ETNAVIV_PTA_ENTRIES); + if (v2_context->id < ETNAVIV_PTA_ENTRIES) { + set_bit(v2_context->id, global->v2.pta_alloc); + } else { + mutex_unlock(&global->lock); + goto out_free; + } + mutex_unlock(&global->lock); - domain->dev = gpu->dev; - domain->base = SZ_4K; - domain->size = (u64)SZ_1G * 4 - SZ_4K; - domain->ops = &etnaviv_iommuv2_ops; + v2_context->mtlb_cpu = dma_alloc_wc(global->dev, SZ_4K, + &v2_context->mtlb_dma, GFP_KERNEL); + if (!v2_context->mtlb_cpu) + goto out_free_id; - ret = etnaviv_iommuv2_init(etnaviv_domain); - if (ret) - goto out_free; + memset32(v2_context->mtlb_cpu, MMUv2_PTE_EXCEPTION, + MMUv2_MAX_STLB_ENTRIES); + + context = &v2_context->base; + context->global = global; + kref_init(&context->refcount); + mutex_init(&context->lock); + INIT_LIST_HEAD(&context->mappings); + drm_mm_init(&context->mm, SZ_4K, (u64)SZ_1G * 4 - SZ_4K); - return &etnaviv_domain->base; + return context; +out_free_id: + clear_bit(v2_context->id, global->v2.pta_alloc); out_free: - vfree(etnaviv_domain); + vfree(v2_context); return NULL; } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c index c4092c8def4f..a155755424f8 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c @@ -8,10 +8,9 @@ #include "etnaviv_drv.h" #include "etnaviv_gem.h" #include "etnaviv_gpu.h" -#include "etnaviv_iommu.h" #include "etnaviv_mmu.h" -static void etnaviv_domain_unmap(struct etnaviv_iommu_domain *domain, +static void etnaviv_context_unmap(struct etnaviv_iommu_context *context, unsigned long iova, size_t size) { size_t unmapped_page, unmapped = 0; @@ -24,7 +23,8 @@ static void etnaviv_domain_unmap(struct etnaviv_iommu_domain *domain, } while (unmapped < size) { - unmapped_page = domain->ops->unmap(domain, iova, pgsize); + unmapped_page = context->global->ops->unmap(context, iova, + pgsize); if (!unmapped_page) break; @@ -33,7 +33,7 @@ static void etnaviv_domain_unmap(struct etnaviv_iommu_domain *domain, } } -static int etnaviv_domain_map(struct etnaviv_iommu_domain *domain, +static int etnaviv_context_map(struct etnaviv_iommu_context *context, unsigned long iova, phys_addr_t paddr, size_t size, int prot) { @@ -49,7 +49,8 @@ static int etnaviv_domain_map(struct etnaviv_iommu_domain *domain, } while (size) { - ret = domain->ops->map(domain, iova, paddr, pgsize, prot); + ret = context->global->ops->map(context, iova, paddr, pgsize, + prot); if (ret) break; @@ -60,21 +61,19 @@ static int etnaviv_domain_map(struct etnaviv_iommu_domain *domain, /* unroll mapping in case something went wrong */ if (ret) - etnaviv_domain_unmap(domain, orig_iova, orig_size - size); + etnaviv_context_unmap(context, orig_iova, orig_size - size); return ret; } -static int etnaviv_iommu_map(struct etnaviv_iommu *iommu, u32 iova, +static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, 
u32 iova, struct sg_table *sgt, unsigned len, int prot) -{ - struct etnaviv_iommu_domain *domain = iommu->domain; - struct scatterlist *sg; +{ struct scatterlist *sg; unsigned int da = iova; unsigned int i, j; int ret; - if (!domain || !sgt) + if (!context || !sgt) return -EINVAL; for_each_sg(sgt->sgl, sg, sgt->nents, i) { @@ -83,7 +82,7 @@ static int etnaviv_iommu_map(struct etnaviv_iommu *iommu, u32 iova, VERB("map[%d]: %08x %08x(%zx)", i, iova, pa, bytes); - ret = etnaviv_domain_map(domain, da, pa, bytes, prot); + ret = etnaviv_context_map(context, da, pa, bytes, prot); if (ret) goto fail; @@ -98,16 +97,15 @@ static int etnaviv_iommu_map(struct etnaviv_iommu *iommu, u32 iova, for_each_sg(sgt->sgl, sg, i, j) { size_t bytes = sg_dma_len(sg) + sg->offset; - etnaviv_domain_unmap(domain, da, bytes); + etnaviv_context_unmap(context, da, bytes); da += bytes; } return ret; } -static void etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, u32 iova, +static void etnaviv_iommu_unmap(struct etnaviv_iommu_context *context, u32 iova, struct sg_table *sgt, unsigned len) { - struct etnaviv_iommu_domain *domain = iommu->domain; struct scatterlist *sg; unsigned int da = iova; int i; @@ -115,7 +113,7 @@ static void etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, u32 iova, for_each_sg(sgt->sgl, sg, sgt->nents, i) { size_t bytes = sg_dma_len(sg) + sg->offset; - etnaviv_domain_unmap(domain, da, bytes); + etnaviv_context_unmap(context, da, bytes); VERB("unmap[%d]: %08x(%zx)", i, iova, bytes); @@ -125,24 +123,24 @@ static void etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, u32 iova, } } -static void etnaviv_iommu_remove_mapping(struct etnaviv_iommu *mmu, +static void etnaviv_iommu_remove_mapping(struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping) { struct etnaviv_gem_object *etnaviv_obj = mapping->object; - etnaviv_iommu_unmap(mmu, mapping->vram_node.start, + etnaviv_iommu_unmap(context, mapping->vram_node.start, etnaviv_obj->sgt, etnaviv_obj->base.size); drm_mm_remove_node(&mapping->vram_node); } -static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu, +static int etnaviv_iommu_find_iova(struct etnaviv_iommu_context *context, struct drm_mm_node *node, size_t size) { struct etnaviv_vram_mapping *free = NULL; enum drm_mm_insert_mode mode = DRM_MM_INSERT_LOW; int ret; - lockdep_assert_held(&mmu->lock); + lockdep_assert_held(&context->lock); while (1) { struct etnaviv_vram_mapping *m, *n; @@ -150,17 +148,17 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu, struct list_head list; bool found; - ret = drm_mm_insert_node_in_range(&mmu->mm, node, + ret = drm_mm_insert_node_in_range(&context->mm, node, size, 0, 0, 0, U64_MAX, mode); if (ret != -ENOSPC) break; /* Try to retire some entries */ - drm_mm_scan_init(&scan, &mmu->mm, size, 0, 0, mode); + drm_mm_scan_init(&scan, &context->mm, size, 0, 0, mode); found = 0; INIT_LIST_HEAD(&list); - list_for_each_entry(free, &mmu->mappings, mmu_node) { + list_for_each_entry(free, &context->mappings, mmu_node) { /* If this vram node has not been used, skip this. */ if (!free->vram_node.mm) continue; @@ -202,8 +200,8 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu, * this mapping. 
*/ list_for_each_entry_safe(m, n, &list, scan_node) { - etnaviv_iommu_remove_mapping(mmu, m); - m->mmu = NULL; + etnaviv_iommu_remove_mapping(context, m); + m->context = NULL; list_del_init(&m->mmu_node); list_del_init(&m->scan_node); } @@ -219,7 +217,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu, return ret; } -int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, +int etnaviv_iommu_map_gem(struct etnaviv_iommu_context *context, struct etnaviv_gem_object *etnaviv_obj, u32 memory_base, struct etnaviv_vram_mapping *mapping) { @@ -229,17 +227,17 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, lockdep_assert_held(&etnaviv_obj->lock); - mutex_lock(&mmu->lock); + mutex_lock(&context->lock); /* v1 MMU can optimize single entry (contiguous) scatterlists */ - if (mmu->version == ETNAVIV_IOMMU_V1 && + if (context->global->version == ETNAVIV_IOMMU_V1 && sgt->nents == 1 && !(etnaviv_obj->flags & ETNA_BO_FORCE_MMU)) { u32 iova; iova = sg_dma_address(sgt->sgl) - memory_base; if (iova < 0x80000000 - sg_dma_len(sgt->sgl)) { mapping->iova = iova; - list_add_tail(&mapping->mmu_node, &mmu->mappings); + list_add_tail(&mapping->mmu_node, &context->mappings); ret = 0; goto unlock; } @@ -247,12 +245,12 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, node = &mapping->vram_node; - ret = etnaviv_iommu_find_iova(mmu, node, etnaviv_obj->base.size); + ret = etnaviv_iommu_find_iova(context, node, etnaviv_obj->base.size); if (ret < 0) goto unlock; mapping->iova = node->start; - ret = etnaviv_iommu_map(mmu, node->start, sgt, etnaviv_obj->base.size, + ret = etnaviv_iommu_map(context, node->start, sgt, etnaviv_obj->base.size, ETNAVIV_PROT_READ | ETNAVIV_PROT_WRITE); if (ret < 0) { @@ -260,79 +258,58 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, goto unlock; } - list_add_tail(&mapping->mmu_node, &mmu->mappings); - mmu->flush_seq++; + list_add_tail(&mapping->mmu_node, &context->mappings); + context->flush_seq++; unlock: - mutex_unlock(&mmu->lock); + mutex_unlock(&context->lock); return ret; } -void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, +void etnaviv_iommu_unmap_gem(struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping) { WARN_ON(mapping->use); - mutex_lock(&mmu->lock); + mutex_lock(&context->lock); /* If the vram node is on the mm, unmap and remove the node */ - if (mapping->vram_node.mm == &mmu->mm) - etnaviv_iommu_remove_mapping(mmu, mapping); + if (mapping->vram_node.mm == &context->mm) + etnaviv_iommu_remove_mapping(context, mapping); list_del(&mapping->mmu_node); - mmu->flush_seq++; - mutex_unlock(&mmu->lock); + context->flush_seq++; + mutex_unlock(&context->lock); } -void etnaviv_iommu_destroy(struct etnaviv_iommu *mmu) +static void etnaviv_iommu_context_free(struct kref *kref) { - drm_mm_takedown(&mmu->mm); - mmu->domain->ops->free(mmu->domain); - kfree(mmu); -} + struct etnaviv_iommu_context *context = + container_of(kref, struct etnaviv_iommu_context, refcount); -struct etnaviv_iommu *etnaviv_iommu_new(struct etnaviv_gpu *gpu) + context->global->ops->free(context); +} +void etnaviv_iommu_context_put(struct etnaviv_iommu_context *context) { - enum etnaviv_iommu_version version; - struct etnaviv_iommu *mmu; - - mmu = kzalloc(sizeof(*mmu), GFP_KERNEL); - if (!mmu) - return ERR_PTR(-ENOMEM); - - if (!(gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION)) { - mmu->domain = etnaviv_iommuv1_domain_alloc(gpu); - version = ETNAVIV_IOMMU_V1; - } else { - mmu->domain = etnaviv_iommuv2_domain_alloc(gpu); - version = ETNAVIV_IOMMU_V2; - } 
- - if (!mmu->domain) { - dev_err(gpu->dev, "Failed to allocate GPU IOMMU domain\n"); - kfree(mmu); - return ERR_PTR(-ENOMEM); - } - - mmu->gpu = gpu; - mmu->version = version; - mutex_init(&mmu->lock); - INIT_LIST_HEAD(&mmu->mappings); - - drm_mm_init(&mmu->mm, mmu->domain->base, mmu->domain->size); - - return mmu; + kref_put(&context->refcount, etnaviv_iommu_context_free); } -void etnaviv_iommu_restore(struct etnaviv_gpu *gpu) +struct etnaviv_iommu_context * +etnaviv_iommu_context_init(struct etnaviv_iommu_global *global) { - if (gpu->mmu->version == ETNAVIV_IOMMU_V1) - etnaviv_iommuv1_restore(gpu); + if (global->version == ETNAVIV_IOMMU_V1) + return etnaviv_iommuv1_context_alloc(global); else - etnaviv_iommuv2_restore(gpu); + return etnaviv_iommuv2_context_alloc(global); +} + +void etnaviv_iommu_restore(struct etnaviv_gpu *gpu, + struct etnaviv_iommu_context *context) +{ + context->global->ops->restore(gpu, context); } -int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu *mmu, +int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping, u32 memory_base, dma_addr_t paddr, size_t size) @@ -340,23 +317,23 @@ int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu *mmu, struct drm_mm_node *node; int ret; - if (mmu->version == ETNAVIV_IOMMU_V1) { + if (context->global->version == ETNAVIV_IOMMU_V1) { mapping->iova = paddr - memory_base; mapping->use = 1; - list_add_tail(&mapping->mmu_node, &mmu->mappings); + list_add_tail(&mapping->mmu_node, &context->mappings); return 0; } node = &mapping->vram_node; - mutex_lock(&mmu->lock); + mutex_lock(&context->lock); - ret = etnaviv_iommu_find_iova(mmu, node, size); + ret = etnaviv_iommu_find_iova(context, node, size); if (ret < 0) goto unlock; mapping->iova = node->start; - ret = etnaviv_domain_map(mmu->domain, node->start, paddr, size, + ret = etnaviv_context_map(context, node->start, paddr, size, ETNAVIV_PROT_READ); if (ret < 0) { @@ -364,36 +341,96 @@ int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu *mmu, goto unlock; } - list_add_tail(&mapping->mmu_node, &mmu->mappings); - mmu->flush_seq++; + list_add_tail(&mapping->mmu_node, &context->mappings); + context->flush_seq++; unlock: - mutex_unlock(&mmu->lock); + mutex_unlock(&context->lock); return ret; } -void etnaviv_iommu_put_suballoc_va(struct etnaviv_iommu *mmu, +void etnaviv_iommu_put_suballoc_va(struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping) { struct drm_mm_node *node = &mapping->vram_node; mapping->use = 0; - if (mmu->version == ETNAVIV_IOMMU_V1) + if (context->global->version == ETNAVIV_IOMMU_V1) return; - mutex_lock(&mmu->lock); - etnaviv_domain_unmap(mmu->domain, node->start, node->size); + mutex_lock(&context->lock); + etnaviv_context_unmap(context, node->start, node->size); drm_mm_remove_node(node); - mutex_unlock(&mmu->lock); + mutex_unlock(&context->lock); } -size_t etnaviv_iommu_dump_size(struct etnaviv_iommu *iommu) +size_t etnaviv_iommu_dump_size(struct etnaviv_iommu_context *context) { - return iommu->domain->ops->dump_size(iommu->domain); + return context->global->ops->dump_size(context); } -void etnaviv_iommu_dump(struct etnaviv_iommu *iommu, void *buf) +void etnaviv_iommu_dump(struct etnaviv_iommu_context *context, void *buf) +{ + context->global->ops->dump(context, buf); +} + +extern const struct etnaviv_iommu_ops etnaviv_iommuv1_ops; +extern const struct etnaviv_iommu_ops etnaviv_iommuv2_ops; + +struct etnaviv_iommu_global * +etnaviv_iommu_global_init(struct device *dev, + enum 
etnaviv_iommu_version version) { - iommu->domain->ops->dump(iommu->domain, buf); + struct etnaviv_iommu_global *global; + + global = kzalloc(sizeof(*global), GFP_KERNEL); + if (!global) + return NULL; + + global->bad_page_cpu = dma_alloc_wc(dev, SZ_4K, &global->bad_page_dma, + GFP_KERNEL); + if (!global->bad_page_cpu) + goto free_global; + + memset32(global->bad_page_cpu, 0xdead55aa, SZ_4K / sizeof(u32)); + + if (version == ETNAVIV_IOMMU_V2) { + global->v2.pta_cpu = dma_alloc_wc(dev, ETNAVIV_PTA_SIZE, + &global->v2.pta_dma, GFP_KERNEL); + if (!global->v2.pta_cpu) + goto free_bad_page; + } + + global->dev = dev; + global->version = version; + mutex_init(&global->lock); + + if (version == ETNAVIV_IOMMU_V1) + global->ops = &etnaviv_iommuv1_ops; + else + global->ops = &etnaviv_iommuv2_ops; + + return global; + +free_bad_page: + dma_free_wc(dev, SZ_4K, global->bad_page_cpu, global->bad_page_dma); +free_global: + kfree(global); + + return NULL; +} + +void etnaviv_iommu_global_fini(struct etnaviv_iommu_global *global) +{ + if (global->v2.pta_cpu) + dma_free_wc(global->dev, ETNAVIV_PTA_SIZE, + global->v2.pta_cpu, global->v2.pta_dma); + + if (global->bad_page_cpu) + dma_free_wc(global->dev, SZ_4K, + global->bad_page_cpu, global->bad_page_dma); + + mutex_destroy(&global->lock); + kfree(global); } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h index 34afe25df9ca..fbcefce873ca 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h @@ -16,33 +16,54 @@ enum etnaviv_iommu_version { struct etnaviv_gpu; struct etnaviv_vram_mapping; -struct etnaviv_iommu_domain; +struct etnaviv_iommu_global; +struct etnaviv_iommu_context; -struct etnaviv_iommu_domain_ops { - void (*free)(struct etnaviv_iommu_domain *); - int (*map)(struct etnaviv_iommu_domain *domain, unsigned long iova, +struct etnaviv_iommu_ops { + struct etnaviv_iommu_context *(*init)(struct etnaviv_iommu_global *); + void (*free)(struct etnaviv_iommu_context *); + int (*map)(struct etnaviv_iommu_context *context, unsigned long iova, phys_addr_t paddr, size_t size, int prot); - size_t (*unmap)(struct etnaviv_iommu_domain *domain, unsigned long iova, + size_t (*unmap)(struct etnaviv_iommu_context *context, unsigned long iova, size_t size); - size_t (*dump_size)(struct etnaviv_iommu_domain *); - void (*dump)(struct etnaviv_iommu_domain *, void *); + size_t (*dump_size)(struct etnaviv_iommu_context *); + void (*dump)(struct etnaviv_iommu_context *, void *); + void (*restore)(struct etnaviv_gpu *, struct etnaviv_iommu_context *); }; -struct etnaviv_iommu_domain { +#define ETNAVIV_PTA_SIZE SZ_4K +#define ETNAVIV_PTA_ENTRIES (ETNAVIV_PTA_SIZE / sizeof(u64)) + +struct etnaviv_iommu_global { struct device *dev; + enum etnaviv_iommu_version version; + const struct etnaviv_iommu_ops *ops; + struct mutex lock; + void *bad_page_cpu; dma_addr_t bad_page_dma; - u64 base; - u64 size; - const struct etnaviv_iommu_domain_ops *ops; + /* + * This union holds members needed by either MMUv1 or MMUv2, which + * can not exist at the same time. 
+ */ + union { + struct { + struct etnaviv_iommu_context *shared_context; + } v1; + struct { + /* P(age) T(able) A(rray) */ + u64 *pta_cpu; + dma_addr_t pta_dma; + struct spinlock pta_lock; + DECLARE_BITMAP(pta_alloc, ETNAVIV_PTA_ENTRIES); + } v2; + }; }; -struct etnaviv_iommu { - struct etnaviv_gpu *gpu; - struct etnaviv_iommu_domain *domain; - - enum etnaviv_iommu_version version; +struct etnaviv_iommu_context { + struct kref refcount; + struct etnaviv_iommu_global *global; /* memory manager for GPU address area */ struct mutex lock; @@ -51,26 +72,41 @@ struct etnaviv_iommu { unsigned int flush_seq; }; +struct etnaviv_iommu_global *etnaviv_iommu_global_init(struct device *dev, + enum etnaviv_iommu_version version); +void etnaviv_iommu_global_fini(struct etnaviv_iommu_global *global); + struct etnaviv_gem_object; -int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, +int etnaviv_iommu_map_gem(struct etnaviv_iommu_context *context, struct etnaviv_gem_object *etnaviv_obj, u32 memory_base, struct etnaviv_vram_mapping *mapping); -void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, +void etnaviv_iommu_unmap_gem(struct etnaviv_iommu_context *context, struct etnaviv_vram_mapping *mapping); -int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu *mmu, +int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu_context *ctx, struct etnaviv_vram_mapping *mapping, u32 memory_base, dma_addr_t paddr, size_t size); -void etnaviv_iommu_put_suballoc_va(struct etnaviv_iommu *mmu, +void etnaviv_iommu_put_suballoc_va(struct etnaviv_iommu_context *ctx, struct etnaviv_vram_mapping *mapping); -size_t etnaviv_iommu_dump_size(struct etnaviv_iommu *iommu); -void etnaviv_iommu_dump(struct etnaviv_iommu *iommu, void *buf); - -struct etnaviv_iommu *etnaviv_iommu_new(struct etnaviv_gpu *gpu); -void etnaviv_iommu_destroy(struct etnaviv_iommu *iommu); -void etnaviv_iommu_restore(struct etnaviv_gpu *gpu); +size_t etnaviv_iommu_dump_size(struct etnaviv_iommu_context *ctx); +void etnaviv_iommu_dump(struct etnaviv_iommu_context *ctx, void *buf); + +struct etnaviv_iommu_context * +etnaviv_iommu_context_init(struct etnaviv_iommu_global *global); +static inline void etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx) +{ + kref_get(&ctx->refcount); +} +void etnaviv_iommu_context_put(struct etnaviv_iommu_context *ctx); +void etnaviv_iommu_restore(struct etnaviv_gpu *gpu, + struct etnaviv_iommu_context *ctx); + +struct etnaviv_iommu_context * +etnaviv_iommuv1_context_alloc(struct etnaviv_iommu_global *global); +struct etnaviv_iommu_context * +etnaviv_iommuv2_context_alloc(struct etnaviv_iommu_global *global); #endif /* __ETNAVIV_MMU_H__ */ From patchwork Wed Dec 19 14:45:43 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lucas Stach X-Patchwork-Id: 10737361 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E0682746 for ; Wed, 19 Dec 2018 14:45:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D01352969C for ; Wed, 19 Dec 2018 14:45:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C451B297F3; Wed, 19 Dec 2018 14:45:55 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, 
RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 38BCF2969C for ; Wed, 19 Dec 2018 14:45:55 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id F18E56EFF9; Wed, 19 Dec 2018 14:45:51 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from metis.ext.pengutronix.de (metis.ext.pengutronix.de [IPv6:2001:67c:670:201:290:27ff:fe1d:cc33]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2F1E96EFF4 for ; Wed, 19 Dec 2018 14:45:51 +0000 (UTC) Received: from dude.hi.pengutronix.de ([2001:67c:670:100:1d::7] helo=dude.pengutronix.de.) by metis.ext.pengutronix.de with esmtp (Exim 4.89) (envelope-from ) id 1gZd6Z-0006y1-E1; Wed, 19 Dec 2018 15:45:47 +0100 From: Lucas Stach To: etnaviv@lists.freedesktop.org Subject: [PATCH 07/10] drm/etnaviv: split out starting of FE idle loop Date: Wed, 19 Dec 2018 15:45:43 +0100 Message-Id: <20181219144546.28224-8-l.stach@pengutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de> References: <20181219144546.28224-1-l.stach@pengutronix.de> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 2001:67c:670:100:1d::7 X-SA-Exim-Mail-From: l.stach@pengutronix.de X-SA-Exim-Scanned: No (on metis.ext.pengutronix.de); SAEximRunCond expanded to false X-PTX-Original-Recipient: dri-devel@lists.freedesktop.org X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Move buffer setup and starting of the FE loop in the kernel ringbuffer into a separate function. This is in preparation for starting the FE later in the submit process.
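In short, the new helper bundles the three steps that bring the front end up: restore the MMU state, build the initial WAIT/LINK loop in the kernel ring buffer, and point the FE at it. A simplified sketch of the helper this patch adds, mirroring the diff below with comments added for clarity:

static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu)
{
	u32 address = etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping);
	u16 prefetch;

	/* program the MMU state needed by the kernel ring buffer */
	etnaviv_iommu_restore(gpu, gpu->mmu);

	/* build the initial WAIT/LINK loop, returns the prefetch value */
	prefetch = etnaviv_buffer_init(gpu);

	/* point the front end at the ring buffer and let it idle there */
	etnaviv_gpu_start_fe(gpu, address, prefetch);
}

etnaviv_gpu_hw_init() now ends by enabling interrupts and calling this helper, which lets a later patch in the series move the FE start into the submit path.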
Signed-off-by: Lucas Stach --- drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 26 ++++++++++++++++---------- 1 file changed, 16 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c index 287995c58134..da0164e06ee5 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c @@ -598,6 +598,20 @@ void etnaviv_gpu_start_fe(struct etnaviv_gpu *gpu, u32 address, u16 prefetch) } } +static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu) +{ + u32 address = etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping); + u16 prefetch; + + /* setup the MMU */ + etnaviv_iommu_restore(gpu, gpu->mmu); + + /* Start command processor */ + prefetch = etnaviv_buffer_init(gpu); + + etnaviv_gpu_start_fe(gpu, address, prefetch); +} + static void etnaviv_gpu_setup_pulse_eater(struct etnaviv_gpu *gpu) { /* @@ -631,8 +645,6 @@ static void etnaviv_gpu_setup_pulse_eater(struct etnaviv_gpu *gpu) static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) { - u16 prefetch; - if ((etnaviv_is_model_rev(gpu, GC320, 0x5007) || etnaviv_is_model_rev(gpu, GC320, 0x5220)) && gpu_read(gpu, VIVS_HI_CHIP_TIME) != 0x2062400) { @@ -678,15 +690,9 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) /* setup the pulse eater */ etnaviv_gpu_setup_pulse_eater(gpu); - /* setup the MMU */ - etnaviv_iommu_restore(gpu, gpu->mmu); - - /* Start command processor */ - prefetch = etnaviv_buffer_init(gpu); - gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U); - etnaviv_gpu_start_fe(gpu, etnaviv_cmdbuf_get_va(&gpu->buffer, - &gpu->cmdbuf_mapping), prefetch); + + etnaviv_gpu_start_fe_idleloop(gpu); } int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) From patchwork Wed Dec 19 14:45:44 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lucas Stach X-Patchwork-Id: 10737371 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 55E8B13B5 for ; Wed, 19 Dec 2018 14:46:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 455D02969C for ; Wed, 19 Dec 2018 14:46:06 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 39E0E29D39; Wed, 19 Dec 2018 14:46:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id D4FA12969C for ; Wed, 19 Dec 2018 14:46:05 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 469C66EFFC; Wed, 19 Dec 2018 14:45:53 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from metis.ext.pengutronix.de (metis.ext.pengutronix.de [IPv6:2001:67c:670:201:290:27ff:fe1d:cc33]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4D5326EFFA for ; Wed, 19 Dec 2018 14:45:51 +0000 (UTC) Received: from dude.hi.pengutronix.de ([2001:67c:670:100:1d::7] helo=dude.pengutronix.de.) 
by metis.ext.pengutronix.de with esmtp (Exim 4.89) (envelope-from ) id 1gZd6Z-0006y1-Ek; Wed, 19 Dec 2018 15:45:47 +0100 From: Lucas Stach To: etnaviv@lists.freedesktop.org Subject: [PATCH 08/10] drm/etnaviv: provide MMU context to etnaviv_gem_mapping_get Date: Wed, 19 Dec 2018 15:45:44 +0100 Message-Id: <20181219144546.28224-9-l.stach@pengutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de> References: <20181219144546.28224-1-l.stach@pengutronix.de> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 2001:67c:670:100:1d::7 X-SA-Exim-Mail-From: l.stach@pengutronix.de X-SA-Exim-Scanned: No (on metis.ext.pengutronix.de); SAEximRunCond expanded to false X-PTX-Original-Recipient: dri-devel@lists.freedesktop.org X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP In preparation to having a context per process, etnaviv_gem_mapping_get should not use the current GPU context, but needs to be told which context to use. Signed-off-by: Lucas Stach --- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 20 ++++++++++++-------- drivers/gpu/drm/etnaviv/etnaviv_gem.h | 3 ++- drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 2 +- 3 files changed, 15 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c index c95deba6f668..b2bf125f12b9 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c @@ -259,7 +259,8 @@ void etnaviv_gem_mapping_unreference(struct etnaviv_vram_mapping *mapping) } struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( - struct drm_gem_object *obj, struct etnaviv_gpu *gpu) + struct drm_gem_object *obj, struct etnaviv_gpu *gpu, + struct etnaviv_iommu_context *mmu) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); struct etnaviv_vram_mapping *mapping; @@ -267,7 +268,7 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( int ret = 0; mutex_lock(&etnaviv_obj->lock); - mapping = etnaviv_gem_get_vram_mapping(etnaviv_obj, gpu->mmu); + mapping = etnaviv_gem_get_vram_mapping(etnaviv_obj, mmu); if (mapping) { /* * Holding the object lock prevents the use count changing @@ -276,12 +277,12 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( * the MMU owns this mapping to close this race. 
*/ if (mapping->use == 0) { - mutex_lock(&gpu->mmu->lock); - if (mapping->context == gpu->mmu) + mutex_lock(&mmu->lock); + if (mapping->context == mmu) mapping->use += 1; else mapping = NULL; - mutex_unlock(&gpu->mmu->lock); + mutex_unlock(&mmu->lock); if (mapping) goto out; } else { @@ -314,10 +315,11 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( list_del(&mapping->obj_node); } - mapping->context = gpu->mmu; + etnaviv_iommu_context_get(mmu); + mapping->context = mmu; mapping->use = 1; - ret = etnaviv_iommu_map_gem(gpu->mmu, etnaviv_obj, gpu->memory_base, + ret = etnaviv_iommu_map_gem(mmu, etnaviv_obj, gpu->memory_base, mapping); if (ret < 0) kfree(mapping); @@ -540,8 +542,10 @@ void etnaviv_gem_free_object(struct drm_gem_object *obj) WARN_ON(mapping->use); - if (context) + if (context) { etnaviv_iommu_unmap_gem(context, mapping); + etnaviv_iommu_context_put(context); + } list_del(&mapping->obj_node); kfree(mapping); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h index cf94f0b24584..79a211f5ac5f 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h @@ -123,7 +123,8 @@ struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *obj); void etnaviv_gem_put_pages(struct etnaviv_gem_object *obj); struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( - struct drm_gem_object *obj, struct etnaviv_gpu *gpu); + struct drm_gem_object *obj, struct etnaviv_gpu *gpu, + struct etnaviv_iommu_context *mmu); void etnaviv_gem_mapping_reference(struct etnaviv_vram_mapping *mapping); void etnaviv_gem_mapping_unreference(struct etnaviv_vram_mapping *mapping); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 989baa7275da..63b856b62098 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -229,7 +229,7 @@ static int submit_pin_objects(struct etnaviv_gem_submit *submit) struct etnaviv_vram_mapping *mapping; mapping = etnaviv_gem_mapping_get(&etnaviv_obj->base, - submit->gpu); + submit->gpu, submit->gpu->mmu); if (IS_ERR(mapping)) { ret = PTR_ERR(mapping); break; From patchwork Wed Dec 19 14:45:45 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lucas Stach X-Patchwork-Id: 10737365 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C101C13B5 for ; Wed, 19 Dec 2018 14:46:00 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B009B296AD for ; Wed, 19 Dec 2018 14:46:00 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A456929D39; Wed, 19 Dec 2018 14:46:00 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 467E0296AD for ; Wed, 19 Dec 2018 14:45:59 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 54EF26EFFA; Wed, 19 
Dec 2018 14:45:52 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from metis.ext.pengutronix.de (metis.ext.pengutronix.de [IPv6:2001:67c:670:201:290:27ff:fe1d:cc33]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7CF256EFFB for ; Wed, 19 Dec 2018 14:45:51 +0000 (UTC) Received: from dude.hi.pengutronix.de ([2001:67c:670:100:1d::7] helo=dude.pengutronix.de.) by metis.ext.pengutronix.de with esmtp (Exim 4.89) (envelope-from ) id 1gZd6Z-0006y1-Fa; Wed, 19 Dec 2018 15:45:47 +0100 From: Lucas Stach To: etnaviv@lists.freedesktop.org Subject: [PATCH 09/10] drm/etnaviv: implement per-process address spaces on MMUv2 Date: Wed, 19 Dec 2018 15:45:45 +0100 Message-Id: <20181219144546.28224-10-l.stach@pengutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de> References: <20181219144546.28224-1-l.stach@pengutronix.de> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 2001:67c:670:100:1d::7 X-SA-Exim-Mail-From: l.stach@pengutronix.de X-SA-Exim-Scanned: No (on metis.ext.pengutronix.de); SAEximRunCond expanded to false X-PTX-Original-Recipient: dri-devel@lists.freedesktop.org X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP MMUv2 allows for proper switching of the MMU context through the command stream. Use this to give each process its own GPU address space. This provides better isolation of the graphics contexts.
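Concretely, every DRM file now gets its own MMU context at open time, and the kernel ring buffer switches to the context of the incoming job from within the command stream. A condensed sketch of the open() side, distilled from the diff below (the scheduler entity setup and the goto-based error path are omitted for brevity):

static int etnaviv_open(struct drm_device *dev, struct drm_file *file)
{
	struct etnaviv_drm_private *priv = dev->dev_private;
	struct etnaviv_file_private *ctx;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	/* private address space for this process */
	ctx->mmu = etnaviv_iommu_context_init(priv->mmu_global,
					      priv->cmdbuf_suballoc);
	if (!ctx->mmu) {
		kfree(ctx);
		return -ENOMEM;
	}

	file->driver_priv = ctx;
	return 0;
}

On the submit side, etnaviv_buffer_queue() notices when the queued job belongs to a different context, takes a reference on the new context before dropping the old one, and either emits a VIVS_MMUv2_PTA_CONFIG load selecting the new page table array entry (kernel secure mode) or folds the new MTLB address into the VIVS_MMUv2_CONFIGURATION flush (non-secure mode), so the hardware switches page tables as part of the command stream.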
Signed-off-by: Lucas Stach --- drivers/gpu/drm/etnaviv/etnaviv_buffer.c | 59 +++++++++--- drivers/gpu/drm/etnaviv/etnaviv_drv.c | 15 ++- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 6 +- drivers/gpu/drm/etnaviv/etnaviv_dump.c | 4 +- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 5 +- drivers/gpu/drm/etnaviv/etnaviv_gem.h | 4 +- drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 7 +- drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 98 +++++++------------- drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 4 - drivers/gpu/drm/etnaviv/etnaviv_iommu.c | 10 +- drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c | 17 +++- drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 44 +++++++-- drivers/gpu/drm/etnaviv/etnaviv_mmu.h | 11 ++- 13 files changed, 177 insertions(+), 107 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c index e1347630fa11..44dcd5077394 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c @@ -116,7 +116,7 @@ static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, u32 *ptr = buf->vaddr + off; dev_info(gpu->dev, "virt %p phys 0x%08x free 0x%08x\n", - ptr, etnaviv_cmdbuf_get_va(buf, &gpu->cmdbuf_mapping) + + ptr, etnaviv_cmdbuf_get_va(buf, &gpu->mmu->cmdbuf_mapping) + off, size - len * 4 - off); print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, @@ -150,7 +150,7 @@ static u32 etnaviv_buffer_reserve(struct etnaviv_gpu *gpu, if (buffer->user_size + cmd_dwords * sizeof(u64) > buffer->size) buffer->user_size = 0; - return etnaviv_cmdbuf_get_va(buffer, &gpu->cmdbuf_mapping) + + return etnaviv_cmdbuf_get_va(buffer, &gpu->mmu->cmdbuf_mapping) + buffer->user_size; } @@ -164,7 +164,7 @@ u16 etnaviv_buffer_init(struct etnaviv_gpu *gpu) buffer->user_size = 0; CMD_WAIT(buffer); - CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->cmdbuf_mapping) + CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->mmu->cmdbuf_mapping) + buffer->user_size - 4); return buffer->user_size / 8; @@ -291,7 +291,7 @@ void etnaviv_sync_point_queue(struct etnaviv_gpu *gpu, unsigned int event) /* Append waitlink */ CMD_WAIT(buffer); - CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->cmdbuf_mapping) + CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->mmu->cmdbuf_mapping) + buffer->user_size - 4); /* @@ -306,25 +306,27 @@ void etnaviv_sync_point_queue(struct etnaviv_gpu *gpu, unsigned int event) /* Append a command buffer to the ring buffer. 
*/ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, - unsigned int event, struct etnaviv_cmdbuf *cmdbuf) + struct etnaviv_iommu_context *mmu, unsigned int event, struct etnaviv_cmdbuf *cmdbuf) { struct etnaviv_cmdbuf *buffer = &gpu->buffer; unsigned int waitlink_offset = buffer->user_size - 16; u32 return_target, return_dwords; u32 link_target, link_dwords; bool switch_context = gpu->exec_state != exec_state; - bool need_flush = gpu->flush_seq != gpu->mmu->flush_seq; + bool switch_mmu_context = gpu->mmu != mmu; + bool need_flush = switch_mmu_context || + gpu->flush_seq != gpu->mmu->flush_seq; lockdep_assert_held(&gpu->lock); if (drm_debug & DRM_UT_DRIVER) etnaviv_buffer_dump(gpu, buffer, 0, 0x50); - link_target = etnaviv_cmdbuf_get_va(cmdbuf, &gpu->cmdbuf_mapping); + link_target = etnaviv_cmdbuf_get_va(cmdbuf, &gpu->mmu->cmdbuf_mapping); link_dwords = cmdbuf->size / 8; /* - * If we need maintanence prior to submitting this buffer, we will + * If we need maintenance prior to submitting this buffer, we will * need to append a mmu flush load state, followed by a new * link to this buffer - a total of four additional words. */ @@ -346,7 +348,24 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, if (switch_context) extra_dwords += 4; + /* PTA load command */ + if (switch_mmu_context && gpu->sec_mode == ETNA_SEC_KERNEL) + extra_dwords += 1; + target = etnaviv_buffer_reserve(gpu, buffer, extra_dwords); + /* + * Switch MMU context if necessary. Must be done after the + * link target has been calculated, as the jump forward in the + * kernel ring still uses the last active MMU context before + * the switch. + */ + if (switch_mmu_context) { + struct etnaviv_iommu_context *mmu_old = gpu->mmu; + + etnaviv_iommu_context_get(mmu); + gpu->mmu = mmu; + etnaviv_iommu_context_put(mmu_old); + } if (need_flush) { /* Add the MMU flush */ @@ -358,10 +377,23 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, VIVS_GL_FLUSH_MMU_FLUSH_PEMMU | VIVS_GL_FLUSH_MMU_FLUSH_UNK4); } else { + u32 flush = VIVS_MMUv2_CONFIGURATION_MODE_MASK | + VIVS_MMUv2_CONFIGURATION_FLUSH_FLUSH; + + if (switch_mmu_context && + gpu->sec_mode == ETNA_SEC_KERNEL) { + unsigned short id = + etnaviv_iommuv2_get_pta_id(gpu->mmu); + CMD_LOAD_STATE(buffer, + VIVS_MMUv2_PTA_CONFIG, + VIVS_MMUv2_PTA_CONFIG_INDEX(id)); + } + + if (gpu->sec_mode == ETNA_SEC_NONE) + flush |= etnaviv_iommuv2_get_mtlb_addr(gpu->mmu); + CMD_LOAD_STATE(buffer, VIVS_MMUv2_CONFIGURATION, - VIVS_MMUv2_CONFIGURATION_MODE_MASK | - VIVS_MMUv2_CONFIGURATION_ADDRESS_MASK | - VIVS_MMUv2_CONFIGURATION_FLUSH_FLUSH); + flush); CMD_SEM(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE); CMD_STALL(buffer, SYNC_RECIPIENT_FE, @@ -377,6 +409,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, } /* And the link to the submitted buffer */ + link_target = etnaviv_cmdbuf_get_va(cmdbuf, &gpu->mmu->cmdbuf_mapping); CMD_LINK(buffer, link_dwords, link_target); /* Update the link target to point to above instructions */ @@ -413,13 +446,13 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE); CMD_WAIT(buffer); - CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->cmdbuf_mapping) + CMD_LINK(buffer, 2, etnaviv_cmdbuf_get_va(buffer, &gpu->mmu->cmdbuf_mapping) + buffer->user_size - 4); if (drm_debug & DRM_UT_DRIVER) pr_info("stream link to 0x%08x @ 0x%08x %p\n", return_target, - etnaviv_cmdbuf_get_va(cmdbuf, 
&gpu->cmdbuf_mapping), + etnaviv_cmdbuf_get_va(cmdbuf, &gpu->mmu->cmdbuf_mapping), cmdbuf->vaddr); if (drm_debug & DRM_UT_DRIVER) { diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c index 103745d8f54c..847facedf421 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c @@ -41,12 +41,19 @@ static int etnaviv_open(struct drm_device *dev, struct drm_file *file) { struct etnaviv_drm_private *priv = dev->dev_private; struct etnaviv_file_private *ctx; - int i; + int ret, i; ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); if (!ctx) return -ENOMEM; + ctx->mmu = etnaviv_iommu_context_init(priv->mmu_global, + priv->cmdbuf_suballoc); + if (!ctx->mmu) { + ret = -ENOMEM; + goto out_free; + } + for (i = 0; i < ETNA_MAX_PIPES; i++) { struct etnaviv_gpu *gpu = priv->gpu[i]; struct drm_sched_rq *rq; @@ -61,6 +68,10 @@ static int etnaviv_open(struct drm_device *dev, struct drm_file *file) file->driver_priv = ctx; return 0; + +out_free: + kfree(ctx); + return ret; } static void etnaviv_postclose(struct drm_device *dev, struct drm_file *file) @@ -76,6 +87,8 @@ static void etnaviv_postclose(struct drm_device *dev, struct drm_file *file) drm_sched_entity_destroy(&ctx->sched_entity[i]); } + etnaviv_iommu_context_put(ctx->mmu); + kfree(ctx); } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index da933f41688f..650516f13557 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -35,10 +35,7 @@ struct etnaviv_gem_submit; struct etnaviv_iommu_global; struct etnaviv_file_private { - /* - * When per-context address spaces are supported we'd keep track of - * the context's page-tables here. - */ + struct etnaviv_iommu_context *mmu; struct drm_sched_entity sched_entity[ETNA_MAX_PIPES]; }; @@ -85,6 +82,7 @@ u16 etnaviv_buffer_config_pta(struct etnaviv_gpu *gpu, unsigned short id); void etnaviv_buffer_end(struct etnaviv_gpu *gpu); void etnaviv_sync_point_queue(struct etnaviv_gpu *gpu, unsigned int event); void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, + struct etnaviv_iommu_context *mmu, unsigned int event, struct etnaviv_cmdbuf *cmdbuf); void etnaviv_validate_init(void); bool etnaviv_cmd_validate_one(struct etnaviv_gpu *gpu, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c index b4e02063b802..d37a91008f99 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c @@ -181,7 +181,7 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu) etnaviv_core_dump_mem(&iter, ETDUMP_BUF_RING, gpu->buffer.vaddr, gpu->buffer.size, etnaviv_cmdbuf_get_va(&gpu->buffer, - &gpu->cmdbuf_mapping)); + &gpu->mmu->cmdbuf_mapping)); spin_lock(&gpu->sched.job_list_lock); list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) { @@ -189,7 +189,7 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu) etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD, submit->cmdbuf.vaddr, submit->cmdbuf.size, etnaviv_cmdbuf_get_va(&submit->cmdbuf, - &gpu->cmdbuf_mapping)); + &gpu->mmu->cmdbuf_mapping)); } spin_unlock(&gpu->sched.job_list_lock); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c index b2bf125f12b9..9231b7dd3725 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c @@ -259,8 +259,7 @@ void etnaviv_gem_mapping_unreference(struct etnaviv_vram_mapping *mapping) } struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( - struct 
drm_gem_object *obj, struct etnaviv_gpu *gpu, - struct etnaviv_iommu_context *mmu) + struct drm_gem_object *obj, struct etnaviv_iommu_context *mmu) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); struct etnaviv_vram_mapping *mapping; @@ -319,7 +318,7 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( mapping->context = mmu; mapping->use = 1; - ret = etnaviv_iommu_map_gem(mmu, etnaviv_obj, gpu->memory_base, + ret = etnaviv_iommu_map_gem(mmu, etnaviv_obj, mmu->global->memory_base, mapping); if (ret < 0) kfree(mapping); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h index 79a211f5ac5f..da19e37c248d 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h @@ -97,6 +97,7 @@ struct etnaviv_gem_submit { struct kref refcount; struct etnaviv_file_private *ctx; struct etnaviv_gpu *gpu; + struct etnaviv_iommu_context *mmu; struct dma_fence *out_fence, *in_fence; int out_fence_id; struct list_head node; /* GPU active submit list */ @@ -123,8 +124,7 @@ struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *obj); void etnaviv_gem_put_pages(struct etnaviv_gem_object *obj); struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( - struct drm_gem_object *obj, struct etnaviv_gpu *gpu, - struct etnaviv_iommu_context *mmu); + struct drm_gem_object *obj, struct etnaviv_iommu_context *mmu); void etnaviv_gem_mapping_reference(struct etnaviv_vram_mapping *mapping); void etnaviv_gem_mapping_unreference(struct etnaviv_vram_mapping *mapping); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 63b856b62098..8754760d4602 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -229,7 +229,7 @@ static int submit_pin_objects(struct etnaviv_gem_submit *submit) struct etnaviv_vram_mapping *mapping; mapping = etnaviv_gem_mapping_get(&etnaviv_obj->base, - submit->gpu, submit->gpu->mmu); + submit->mmu); if (IS_ERR(mapping)) { ret = PTR_ERR(mapping); break; @@ -366,6 +366,9 @@ static void submit_cleanup(struct kref *kref) if (submit->cmdbuf.suballoc) etnaviv_cmdbuf_free(&submit->cmdbuf); + if (submit->mmu) + etnaviv_iommu_context_put(submit->mmu); + for (i = 0; i < submit->nr_bos; i++) { struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj; @@ -507,6 +510,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, goto err_submit_objects; submit->ctx = file->driver_priv; + etnaviv_iommu_context_get(submit->ctx->mmu); + submit->mmu = submit->ctx->mmu; submit->exec_state = args->exec_state; submit->flags = args->flags; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c index da0164e06ee5..e022314a7781 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c @@ -600,7 +600,7 @@ void etnaviv_gpu_start_fe(struct etnaviv_gpu *gpu, u32 address, u16 prefetch) static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu) { - u32 address = etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping); + u32 address = etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->mmu->cmdbuf_mapping); u16 prefetch; /* setup the MMU */ @@ -691,8 +691,6 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) etnaviv_gpu_setup_pulse_eater(gpu); gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U); - - etnaviv_gpu_start_fe_idleloop(gpu); } int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) @@ -722,28 +720,6 @@ int 
etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) goto fail; } - /* - * Set the GPU linear window to be at the end of the DMA window, where - * the CMA area is likely to reside. This ensures that we are able to - * map the command buffers while having the linear window overlap as - * much RAM as possible, so we can optimize mappings for other buffers. - * - * For 3D cores only do this if MC2.0 is present, as with MC1.0 it leads - * to different views of the memory on the individual engines. - */ - if (!(gpu->identity.features & chipFeatures_PIPE_3D) || - (gpu->identity.minor_features0 & chipMinorFeatures0_MC20)) { - u32 dma_mask = (u32)dma_get_required_mask(gpu->dev); - if (dma_mask < PHYS_OFFSET + SZ_2G) - gpu->memory_base = PHYS_OFFSET; - else - gpu->memory_base = dma_mask - SZ_2G + 1; - } else if (PHYS_OFFSET >= SZ_2G) { - dev_info(gpu->dev, "Need to move linear window on MC1.0, disabling TS\n"); - gpu->memory_base = PHYS_OFFSET; - gpu->identity.features &= ~chipFeatures_FAST_CLEAR; - } - /* * On cores with security features supported, we claim control over the * security states. @@ -771,19 +747,26 @@ int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) goto fail; } - gpu->mmu = etnaviv_iommu_context_init(priv->mmu_global); - if (IS_ERR(gpu->mmu)) { - dev_err(gpu->dev, "Failed to instantiate GPU IOMMU\n"); - ret = PTR_ERR(gpu->mmu); - goto fail; - } - - ret = etnaviv_cmdbuf_suballoc_map(priv->cmdbuf_suballoc, gpu->mmu, - &gpu->cmdbuf_mapping, - gpu->memory_base); - if (ret) { - dev_err(gpu->dev, "failed to map cmdbuf suballoc\n"); - goto destroy_iommu; + /* + * Set the GPU linear window to be at the end of the DMA window, where + * the CMA area is likely to reside. This ensures that we are able to + * map the command buffers while having the linear window overlap as + * much RAM as possible, so we can optimize mappings for other buffers. + * + * For 3D cores only do this if MC2.0 is present, as with MC1.0 it leads + * to different views of the memory on the individual engines. 
+ */ + if (!(gpu->identity.features & chipFeatures_PIPE_3D) || + (gpu->identity.minor_features0 & chipMinorFeatures0_MC20)) { + u32 dma_mask = (u32)dma_get_required_mask(gpu->dev); + if (dma_mask < PHYS_OFFSET + SZ_2G) + priv->mmu_global->memory_base = PHYS_OFFSET; + else + priv->mmu_global->memory_base = dma_mask - SZ_2G + 1; + } else if (PHYS_OFFSET >= SZ_2G) { + dev_info(gpu->dev, "Need to move linear window on MC1.0, disabling TS\n"); + priv->mmu_global->memory_base = PHYS_OFFSET; + gpu->identity.features &= ~chipFeatures_FAST_CLEAR; } /* Create buffer: */ @@ -791,15 +774,7 @@ int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) PAGE_SIZE); if (ret) { dev_err(gpu->dev, "could not create command buffer\n"); - goto unmap_suballoc; - } - - if (mmu_version == ETNAVIV_IOMMU_V1 && - etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping) > 0x80000000) { - ret = -EINVAL; - dev_err(gpu->dev, - "command buffer outside valid memory window\n"); - goto free_buffer; + goto fail; } /* Setup event management */ @@ -820,14 +795,6 @@ int etnaviv_gpu_init(struct etnaviv_drm_private *priv, struct etnaviv_gpu *gpu) return 0; -free_buffer: - etnaviv_cmdbuf_free(&gpu->buffer); - gpu->buffer.suballoc = NULL; -unmap_suballoc: - etnaviv_cmdbuf_suballoc_unmap(gpu->mmu, &gpu->cmdbuf_mapping); -destroy_iommu: - etnaviv_iommu_context_put(gpu->mmu); - gpu->mmu = NULL; fail: pm_runtime_mark_last_busy(gpu->dev); pm_runtime_put_autosuspend(gpu->dev); @@ -1021,6 +988,7 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu) etnaviv_gpu_hw_init(gpu); gpu->exec_state = -1; + gpu->mmu = NULL; mutex_unlock(&gpu->lock); pm_runtime_mark_last_busy(gpu->dev); @@ -1327,6 +1295,12 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit) goto out_unlock; } + if (!gpu->mmu) { + etnaviv_iommu_context_get(submit->mmu); + gpu->mmu = submit->mmu; + etnaviv_gpu_start_fe_idleloop(gpu); + } + if (submit->nr_pmrs) { gpu->event[event[1]].sync_point = &sync_point_perfmon_sample_pre; kref_get(&submit->refcount); @@ -1336,7 +1310,7 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit) gpu->event[event[0]].fence = gpu_fence; submit->cmdbuf.user_size = submit->cmdbuf.size - 8; - etnaviv_buffer_queue(gpu, submit->exec_state, event[0], + etnaviv_buffer_queue(gpu, submit->exec_state, submit->mmu, event[0], &submit->cmdbuf); if (submit->nr_pmrs) { @@ -1539,7 +1513,7 @@ int etnaviv_gpu_wait_idle(struct etnaviv_gpu *gpu, unsigned int timeout_ms) static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu) { - if (gpu->buffer.suballoc) { + if (gpu->buffer.suballoc && gpu->mmu) { /* Replace the last WAIT with END */ mutex_lock(&gpu->lock); etnaviv_buffer_end(gpu); @@ -1551,8 +1525,13 @@ static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu) * we fail, just warn and continue. 
*/ etnaviv_gpu_wait_idle(gpu, 100); + + etnaviv_iommu_context_put(gpu->mmu); + gpu->mmu = NULL; } + gpu->exec_state = -1; + return etnaviv_gpu_clk_disable(gpu); } @@ -1568,8 +1547,6 @@ static int etnaviv_gpu_hw_resume(struct etnaviv_gpu *gpu) etnaviv_gpu_update_clock(gpu); etnaviv_gpu_hw_init(gpu); - gpu->exec_state = -1; - mutex_unlock(&gpu->lock); return 0; @@ -1701,9 +1678,6 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, if (gpu->buffer.suballoc) etnaviv_cmdbuf_free(&gpu->buffer); - if (gpu->cmdbuf_mapping.use == 1) - etnaviv_cmdbuf_suballoc_unmap(gpu->mmu, &gpu->cmdbuf_mapping); - if (gpu->mmu) { etnaviv_iommu_context_put(gpu->mmu); gpu->mmu = NULL; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h index 50f03ee55500..012122320b48 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h @@ -102,13 +102,9 @@ struct etnaviv_gpu { struct drm_gpu_scheduler sched; /* 'ring'-buffer: */ - struct etnaviv_vram_mapping cmdbuf_mapping; struct etnaviv_cmdbuf buffer; int exec_state; - /* bus base address of memory */ - u32 memory_base; - /* event management: */ DECLARE_BITMAP(event_bitmap, ETNA_NR_EVENTS); struct etnaviv_event event[ETNA_NR_EVENTS]; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c index c93aa0b1ad9e..afb3004535f1 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c @@ -93,11 +93,11 @@ static void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu, u32 pgtable; /* set base addresses */ - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, gpu->memory_base); - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, gpu->memory_base); - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_TX, gpu->memory_base); - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PEZ, gpu->memory_base); - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PE, gpu->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, context->global->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, context->global->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_TX, context->global->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PEZ, context->global->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PE, context->global->memory_base); /* set page table address in MC */ pgtable = (u32)v1_context->pgtable_dma; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c index 8b6b10354228..73b5d6e09adf 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c @@ -205,7 +205,7 @@ static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu, VIVS_MMUv2_SAFE_ADDRESS_CONFIG_SEC_SAFE_ADDR_HIGH( upper_32_bits(context->global->bad_page_dma))); - context->global->v2.pta_cpu[0] = v2_context->mtlb_dma | + context->global->v2.pta_cpu[v2_context->id] = v2_context->mtlb_dma | VIVS_MMUv2_CONFIGURATION_MODE_MODE4_K; /* trigger a PTA load through the FE */ @@ -217,6 +217,19 @@ static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu, gpu_write(gpu, VIVS_MMUv2_SEC_CONTROL, VIVS_MMUv2_SEC_CONTROL_ENABLE); } +u32 etnaviv_iommuv2_get_mtlb_addr(struct etnaviv_iommu_context *context) +{ + struct etnaviv_iommuv2_context *v2_context = to_v2_context(context); + + return v2_context->mtlb_dma; +} + +unsigned short etnaviv_iommuv2_get_pta_id(struct etnaviv_iommu_context *context) +{ + struct etnaviv_iommuv2_context *v2_context = to_v2_context(context); + + return 
v2_context->id; +} static void etnaviv_iommuv2_restore(struct etnaviv_gpu *gpu, struct etnaviv_iommu_context *context) { @@ -271,6 +284,8 @@ etnaviv_iommuv2_context_alloc(struct etnaviv_iommu_global *global) memset32(v2_context->mtlb_cpu, MMUv2_PTE_EXCEPTION, MMUv2_MAX_STLB_ENTRIES); + global->v2.pta_cpu[v2_context->id] = v2_context->mtlb_dma; + context = &v2_context->base; context->global = global; kref_init(&context->refcount); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c index a155755424f8..99cd2f8c7b5d 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c @@ -287,6 +287,8 @@ static void etnaviv_iommu_context_free(struct kref *kref) struct etnaviv_iommu_context *context = container_of(kref, struct etnaviv_iommu_context, refcount); + etnaviv_cmdbuf_suballoc_unmap(context, &context->cmdbuf_mapping); + context->global->ops->free(context); } void etnaviv_iommu_context_put(struct etnaviv_iommu_context *context) @@ -295,12 +297,28 @@ void etnaviv_iommu_context_put(struct etnaviv_iommu_context *context) } struct etnaviv_iommu_context * -etnaviv_iommu_context_init(struct etnaviv_iommu_global *global) +etnaviv_iommu_context_init(struct etnaviv_iommu_global *global, + struct etnaviv_cmdbuf_suballoc *suballoc) { + struct etnaviv_iommu_context *ctx; + int ret; + if (global->version == ETNAVIV_IOMMU_V1) - return etnaviv_iommuv1_context_alloc(global); + ctx = etnaviv_iommuv1_context_alloc(global); else - return etnaviv_iommuv2_context_alloc(global); + ctx = etnaviv_iommuv2_context_alloc(global); + + if (!ctx) + return NULL; + + ret = etnaviv_cmdbuf_suballoc_map(suballoc, ctx, &ctx->cmdbuf_mapping, + global->memory_base); + if (ret) { + etnaviv_iommu_context_put(ctx); + return NULL; + } + + return ctx; } void etnaviv_iommu_restore(struct etnaviv_gpu *gpu, @@ -317,17 +335,24 @@ int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu_context *context, struct drm_mm_node *node; int ret; + mutex_lock(&context->lock); + + if (mapping->use > 0) { + mapping->use++; + mutex_unlock(&context->lock); + return 0; + } + if (context->global->version == ETNAVIV_IOMMU_V1) { mapping->iova = paddr - memory_base; mapping->use = 1; list_add_tail(&mapping->mmu_node, &context->mappings); + mutex_unlock(&context->lock); return 0; } node = &mapping->vram_node; - mutex_lock(&context->lock); - ret = etnaviv_iommu_find_iova(context, node, size); if (ret < 0) goto unlock; @@ -341,6 +366,7 @@ int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu_context *context, goto unlock; } + mapping->use = 1; list_add_tail(&mapping->mmu_node, &context->mappings); context->flush_seq++; unlock: @@ -354,12 +380,14 @@ void etnaviv_iommu_put_suballoc_va(struct etnaviv_iommu_context *context, { struct drm_mm_node *node = &mapping->vram_node; - mapping->use = 0; + mutex_lock(&context->lock); + mapping->use--; - if (context->global->version == ETNAVIV_IOMMU_V1) + if (mapping->use > 0 || context->global->version == ETNAVIV_IOMMU_V1) { + mutex_unlock(&context->lock); return; + } - mutex_lock(&context->lock); etnaviv_context_unmap(context, node->start, node->size); drm_mm_remove_node(node); mutex_unlock(&context->lock); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h index fbcefce873ca..6dcbddd34b27 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h @@ -43,6 +43,8 @@ struct etnaviv_iommu_global { void *bad_page_cpu; dma_addr_t bad_page_dma; + u32 memory_base; + /* * This union 
holds members needed by either MMUv1 or MMUv2, which * can not exist at the same time. @@ -70,6 +72,9 @@ struct etnaviv_iommu_context { struct list_head mappings; struct drm_mm mm; unsigned int flush_seq; + + /* Not part of the context, but needs to have the same lifetime */ + struct etnaviv_vram_mapping cmdbuf_mapping; }; struct etnaviv_iommu_global *etnaviv_iommu_global_init(struct device *dev, @@ -95,7 +100,8 @@ size_t etnaviv_iommu_dump_size(struct etnaviv_iommu_context *ctx); void etnaviv_iommu_dump(struct etnaviv_iommu_context *ctx, void *buf); struct etnaviv_iommu_context * -etnaviv_iommu_context_init(struct etnaviv_iommu_global *global); +etnaviv_iommu_context_init(struct etnaviv_iommu_global *global, + struct etnaviv_cmdbuf_suballoc *suballoc); static inline void etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx) { kref_get(&ctx->refcount); @@ -109,4 +115,7 @@ etnaviv_iommuv1_context_alloc(struct etnaviv_iommu_global *global); struct etnaviv_iommu_context * etnaviv_iommuv2_context_alloc(struct etnaviv_iommu_global *global); +u32 etnaviv_iommuv2_get_mtlb_addr(struct etnaviv_iommu_context *context); +unsigned short etnaviv_iommuv2_get_pta_id(struct etnaviv_iommu_context *context); + #endif /* __ETNAVIV_MMU_H__ */ From patchwork Wed Dec 19 14:45:46 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lucas Stach X-Patchwork-Id: 10737367 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A0211746 for ; Wed, 19 Dec 2018 14:46:03 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8EF13296AD for ; Wed, 19 Dec 2018 14:46:03 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 823292A111; Wed, 19 Dec 2018 14:46:03 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 14A5B296AD for ; Wed, 19 Dec 2018 14:46:03 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C278F6EFFB; Wed, 19 Dec 2018 14:45:52 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from metis.ext.pengutronix.de (metis.ext.pengutronix.de [IPv6:2001:67c:670:201:290:27ff:fe1d:cc33]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4A02D6EFF9 for ; Wed, 19 Dec 2018 14:45:51 +0000 (UTC) Received: from dude.hi.pengutronix.de ([2001:67c:670:100:1d::7] helo=dude.pengutronix.de.) 
by metis.ext.pengutronix.de with esmtp (Exim 4.89) (envelope-from ) id 1gZd6Z-0006y1-Gu; Wed, 19 Dec 2018 15:45:47 +0100 From: Lucas Stach To: etnaviv@lists.freedesktop.org Subject: [PATCH 10/10] drm/etnaviv: dump only failing submit Date: Wed, 19 Dec 2018 15:45:46 +0100 Message-Id: <20181219144546.28224-11-l.stach@pengutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181219144546.28224-1-l.stach@pengutronix.de> References: <20181219144546.28224-1-l.stach@pengutronix.de> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 2001:67c:670:100:1d::7 X-SA-Exim-Mail-From: l.stach@pengutronix.de X-SA-Exim-Scanned: No (on metis.ext.pengutronix.de); SAEximRunCond expanded to false X-PTX-Original-Recipient: dri-devel@lists.freedesktop.org X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: patchwork-lst@pengutronix.de, kernel@pengutronix.de, dri-devel@lists.freedesktop.org, Russell King Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Due to the tracking provided by the scheduler we know which submit is failing. Only dump this single submit and the required auxiliary information. This cuts down the size of the devcoredumps by only including relevant information. Signed-off-by: Lucas Stach --- drivers/gpu/drm/etnaviv/etnaviv_dump.c | 63 +++++++++---------------- drivers/gpu/drm/etnaviv/etnaviv_dump.h | 4 +- drivers/gpu/drm/etnaviv/etnaviv_sched.c | 2 +- 3 files changed, 25 insertions(+), 44 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c index d37a91008f99..8065ac9a0536 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c @@ -91,9 +91,9 @@ static void etnaviv_core_dump_registers(struct core_dump_iterator *iter, } static void etnaviv_core_dump_mmu(struct core_dump_iterator *iter, - struct etnaviv_gpu *gpu, size_t mmu_size) + struct etnaviv_iommu_context *mmu, size_t mmu_size) { - etnaviv_iommu_dump(gpu->mmu, iter->data); + etnaviv_iommu_dump(mmu, iter->data); etnaviv_core_dump_header(iter, ETDUMP_BUF_MMU, iter->data + mmu_size); } @@ -108,46 +108,33 @@ static void etnaviv_core_dump_mem(struct core_dump_iterator *iter, u32 type, etnaviv_core_dump_header(iter, type, iter->data + size); } -void etnaviv_core_dump(struct etnaviv_gpu *gpu) +void etnaviv_core_dump(struct etnaviv_gem_submit *submit) { + struct etnaviv_gpu *gpu = submit->gpu; struct core_dump_iterator iter; - struct etnaviv_vram_mapping *vram; struct etnaviv_gem_object *obj; - struct etnaviv_gem_submit *submit; - struct drm_sched_job *s_job; unsigned int n_obj, n_bomap_pages; size_t file_size, mmu_size; __le64 *bomap, *bomap_start; + int i; /* Only catch the first event, or when manually re-armed */ if (!etnaviv_dump_core) return; etnaviv_dump_core = false; - mmu_size = etnaviv_iommu_dump_size(gpu->mmu); + mmu_size = etnaviv_iommu_dump_size(submit->mmu); - /* We always dump registers, mmu, ring and end marker */ - n_obj = 4; + /* We always dump registers, mmu, ring hanging cmdbuf and end marker */ + n_obj = 5; n_bomap_pages = 0; file_size = ARRAY_SIZE(etnaviv_dump_registers) * sizeof(struct etnaviv_dump_registers) + - mmu_size + gpu->buffer.size; - - /* Add in the active command buffers */ - spin_lock(&gpu->sched.job_list_lock); - list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) { - submit = 
to_etnaviv_submit(s_job); - file_size += submit->cmdbuf.size; - n_obj++; - } - spin_unlock(&gpu->sched.job_list_lock); + mmu_size + gpu->buffer.size + submit->cmdbuf.size; /* Add in the active buffer objects */ - list_for_each_entry(vram, &gpu->mmu->mappings, mmu_node) { - if (!vram->use) - continue; - - obj = vram->object; + for (i = 0; i < submit->nr_bos; i++) { + obj = submit->bos[i].obj; file_size += obj->base.size; n_bomap_pages += obj->base.size >> PAGE_SHIFT; n_obj++; @@ -177,21 +164,16 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu) memset(iter.hdr, 0, iter.data - iter.start); etnaviv_core_dump_registers(&iter, gpu); - etnaviv_core_dump_mmu(&iter, gpu, mmu_size); + etnaviv_core_dump_mmu(&iter, submit->mmu, mmu_size); etnaviv_core_dump_mem(&iter, ETDUMP_BUF_RING, gpu->buffer.vaddr, gpu->buffer.size, etnaviv_cmdbuf_get_va(&gpu->buffer, - &gpu->mmu->cmdbuf_mapping)); - - spin_lock(&gpu->sched.job_list_lock); - list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) { - submit = to_etnaviv_submit(s_job); - etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD, - submit->cmdbuf.vaddr, submit->cmdbuf.size, - etnaviv_cmdbuf_get_va(&submit->cmdbuf, - &gpu->mmu->cmdbuf_mapping)); - } - spin_unlock(&gpu->sched.job_list_lock); + &submit->mmu->cmdbuf_mapping)); + + etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD, + submit->cmdbuf.vaddr, submit->cmdbuf.size, + etnaviv_cmdbuf_get_va(&submit->cmdbuf, + &submit->mmu->cmdbuf_mapping)); /* Reserve space for the bomap */ if (n_bomap_pages) { @@ -204,14 +186,13 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu) bomap_start = bomap = NULL; } - list_for_each_entry(vram, &gpu->mmu->mappings, mmu_node) { + for (i = 0; i < submit->nr_bos; i++) { + struct etnaviv_vram_mapping *vram; struct page **pages; void *vaddr; - if (vram->use == 0) - continue; - - obj = vram->object; + obj = submit->bos[i].obj; + vram = submit->bos[i].mapping; mutex_lock(&obj->lock); pages = etnaviv_gem_get_pages(obj); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.h b/drivers/gpu/drm/etnaviv/etnaviv_dump.h index 2d916c2667ee..a125c46b895b 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_dump.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.h @@ -35,8 +35,8 @@ struct etnaviv_dump_registers { }; #ifdef __KERNEL__ -struct etnaviv_gpu; -void etnaviv_core_dump(struct etnaviv_gpu *gpu); +struct etnaviv_gem_submit; +void etnaviv_core_dump(struct etnaviv_gem_submit *submit); #endif #endif diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index 1bb076f8b2b7..2d6508d5a2d2 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -115,7 +115,7 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job) drm_sched_hw_job_reset(&gpu->sched, sched_job); /* get the GPU back into the init state */ - etnaviv_core_dump(gpu); + etnaviv_core_dump(submit); etnaviv_gpu_recover_hang(gpu); /* restart scheduler after GPU is usable again */
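A note on the sizing logic changed by the dump rework above: with "drm/etnaviv: dump only failing submit", the devcoredump is sized and filled from the failing submit's BO list instead of walking every active mapping in the GPU's MMU context. The following is a minimal, self-contained C sketch of that bookkeeping only; demo_bo, demo_submit, demo_dump_layout and demo_dump_size are illustrative stand-in names, not the driver's real structures or functions, and the sketch deliberately omits the iterator and header-writing details of the actual patch.

/*
 * Illustrative sketch, not driver code: shows how a per-submit
 * devcoredump can be sized from the submit's BO list rather than
 * from a global mapping list.
 */
#include <stddef.h>

#define DEMO_PAGE_SHIFT 12

struct demo_bo {
	size_t size;			/* backing object size in bytes */
};

struct demo_submit {
	unsigned int nr_bos;		/* number of BOs referenced by the job */
	struct demo_bo **bos;		/* the referenced BOs */
	size_t cmdbuf_size;		/* user command stream size */
};

struct demo_dump_layout {
	size_t file_size;		/* total dump payload size */
	unsigned int n_obj;		/* number of dump sections */
	unsigned int n_bomap_pages;	/* pages needed for the BO page map */
};

static struct demo_dump_layout
demo_dump_size(const struct demo_submit *submit, size_t reg_size,
	       size_t mmu_size, size_t ring_size)
{
	/*
	 * Fixed sections, mirroring the patch: registers, MMU state,
	 * kernel ring, the hanging command buffer and an end marker.
	 */
	struct demo_dump_layout layout = {
		.file_size = reg_size + mmu_size + ring_size +
			     submit->cmdbuf_size,
		.n_obj = 5,
		.n_bomap_pages = 0,
	};
	unsigned int i;

	/* Only the BOs referenced by the failing submit are added. */
	for (i = 0; i < submit->nr_bos; i++) {
		layout.file_size += submit->bos[i]->size;
		layout.n_bomap_pages += submit->bos[i]->size >> DEMO_PAGE_SHIFT;
		layout.n_obj++;
	}

	return layout;
}

Because the submit already holds references to exactly the objects the failing job touched, this accounting needs no locking against the scheduler's job list, which is the simplification the commit message describes.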