From patchwork Mon Sep 14 22:40:21 2020
X-Patchwork-Submitter: Jordan Crouse <jcrouse@codeaurora.org>
X-Patchwork-Id: 11775089
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Subject: [PATCH 1/3] drm/msm: Allow a5xx to mark the RPTR
  shadow as privileged
Date: Mon, 14 Sep 2020 16:40:21 -0600
Message-Id: <20200914224023.1495082-2-jcrouse@codeaurora.org>
In-Reply-To: <20200914224023.1495082-1-jcrouse@codeaurora.org>
References: <20200914224023.1495082-1-jcrouse@codeaurora.org>
Cc: Wambui Karuga, Jonathan Marek, David Airlie, Greg Kroah-Hartman,
    freedreno@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, Bjorn Andersson, Emil Velikov,
    Ben Dooks, AngeloGioacchino Del Regno, Sean Paul, Brian Masney

Newer microcode versions have support for the CP_WHERE_AM_I opcode, which
allows the RPTR shadow memory to be marked as privileged to protect it
from corruption. Move the RPTR shadow into its own buffer and protect it
if the current microcode version supports the new feature.

We can also re-enable preemption for those targets that support
CP_WHERE_AM_I. Start out by preemptively assuming that we can enable
preemption and disable it in a5xx_hw_init if the microcode version comes
back as too old.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c       | 96 ++++++++++++++++++---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.h       | 12 +++
 drivers/gpu/drm/msm/adreno/a5xx_power.c     |  2 +-
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c   |  5 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c     |  5 ++
 drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h |  1 +
 drivers/gpu/drm/msm/msm_gpu.h               |  1 +
 7 files changed, 109 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 616d9e798058..835aaef72b00 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -18,13 +18,24 @@ static void a5xx_dump(struct msm_gpu *gpu);
 
 #define GPU_PAS_ID 13
 
-static void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
+        bool sync)
 {
     struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
     struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
     uint32_t wptr;
     unsigned long flags;
 
+    /*
+     * Most flush operations need to issue a WHERE_AM_I opcode to sync up
+     * the rptr shadow
+     */
+    if (a5xx_gpu->has_whereami && sync) {
+        OUT_PKT7(ring, CP_WHERE_AM_I, 2);
+        OUT_RING(ring, lower_32_bits(shadowptr(a5xx_gpu, ring)));
+        OUT_RING(ring, upper_32_bits(shadowptr(a5xx_gpu, ring)));
+    }
+
     spin_lock_irqsave(&ring->lock, flags);
 
     /* Copy the shadow to the actual register */
@@ -90,7 +101,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit)
         }
     }
 
-    a5xx_flush(gpu, ring);
+    a5xx_flush(gpu, ring, true);
     a5xx_preempt_trigger(gpu);
 
     /* we might not necessarily have a cmd from userspace to
@@ -204,7 +215,8 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
     /* Set bit 0 to trigger an interrupt on preempt complete */
     OUT_RING(ring, 0x01);
 
-    a5xx_flush(gpu, ring);
+    /* A WHERE_AM_I packet is not needed after a YIELD */
+    a5xx_flush(gpu, ring, false);
 
     /* Check to see if we need to start preemption */
     a5xx_preempt_trigger(gpu);
@@ -363,7 +375,7 @@ static int a5xx_me_init(struct msm_gpu *gpu)
     OUT_RING(ring, 0x00000000);
     OUT_RING(ring, 0x00000000);
 
-    gpu->funcs->flush(gpu, ring);
+    a5xx_flush(gpu, ring, true);
 
     return a5xx_idle(gpu, ring) ? 0 : -EINVAL;
 }
 
@@ -405,11 +417,31 @@ static int a5xx_preempt_start(struct msm_gpu *gpu)
     OUT_RING(ring, 0x01);
     OUT_RING(ring, 0x01);
 
-    gpu->funcs->flush(gpu, ring);
+    /* The WHERE_AM_I packet is not needed after a YIELD is issued */
+    a5xx_flush(gpu, ring, false);
 
     return a5xx_idle(gpu, ring) ? 0 : -EINVAL;
 }
 
+static void a5xx_ucode_check_version(struct a5xx_gpu *a5xx_gpu,
+        struct drm_gem_object *obj)
+{
+    u32 *buf = msm_gem_get_vaddr_active(obj);
+
+    if (IS_ERR(buf))
+        return;
+
+    /*
+     * If the lowest nibble is 0xa that is an indication that this
+     * microcode has been patched. The actual version is in dword [3]
+     * but we only care about the patchlevel which is the lowest nibble
+     * of dword [3]
+     */
+    if (((buf[0] & 0xf) == 0xa) && (buf[2] & 0xf) >= 1)
+        a5xx_gpu->has_whereami = true;
+
+    msm_gem_put_vaddr(obj);
+}
+
 static int a5xx_ucode_init(struct msm_gpu *gpu)
 {
     struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -445,6 +477,7 @@ static int a5xx_ucode_init(struct msm_gpu *gpu)
         }
 
         msm_gem_object_set_name(a5xx_gpu->pfp_bo, "pfpfw");
+        a5xx_ucode_check_version(a5xx_gpu, a5xx_gpu->pfp_bo);
     }
 
     gpu_write64(gpu, REG_A5XX_CP_ME_INSTR_BASE_LO,
@@ -504,6 +537,7 @@ static int a5xx_zap_shader_init(struct msm_gpu *gpu)
 static int a5xx_hw_init(struct msm_gpu *gpu)
 {
     struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+    struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
     int ret;
 
     gpu_write(gpu, REG_A5XX_VBIF_ROUND_ROBIN_QOS_ARB, 0x00000003);
@@ -712,9 +746,36 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
     gpu_write64(gpu, REG_A5XX_CP_RB_BASE, REG_A5XX_CP_RB_BASE_HI,
         gpu->rb[0]->iova);
 
+    /*
+     * If the microcode supports the WHERE_AM_I opcode then we can use that
+     * in lieu of the RPTR shadow and enable preemption. Otherwise, we
+     * can't safely use the RPTR shadow or preemption. In either case, the
+     * RPTR shadow should be disabled in hardware.
+     */
     gpu_write(gpu, REG_A5XX_CP_RB_CNTL,
         MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
 
+    /* Disable preemption if WHERE_AM_I isn't available */
+    if (!a5xx_gpu->has_whereami && gpu->nr_rings > 1) {
+        a5xx_preempt_fini(gpu);
+        gpu->nr_rings = 1;
+    } else {
+        /* Create a privileged buffer for the RPTR shadow */
+        if (!a5xx_gpu->shadow_bo) {
+            a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
+                sizeof(u32) * gpu->nr_rings,
+                MSM_BO_UNCACHED | MSM_BO_MAP_PRIV,
+                gpu->aspace, &a5xx_gpu->shadow_bo,
+                &a5xx_gpu->shadow_iova);
+
+            if (IS_ERR(a5xx_gpu->shadow))
+                return PTR_ERR(a5xx_gpu->shadow);
+        }
+
+        gpu_write64(gpu, REG_A5XX_CP_RB_RPTR_ADDR,
+            REG_A5XX_CP_RB_RPTR_ADDR_HI, shadowptr(a5xx_gpu, gpu->rb[0]));
+    }
+
     a5xx_preempt_hw_init(gpu);
 
     /* Disable the interrupts through the initial bringup stage */
@@ -738,7 +799,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
         OUT_PKT7(gpu->rb[0], CP_EVENT_WRITE, 1);
         OUT_RING(gpu->rb[0], CP_EVENT_WRITE_0_EVENT(STAT_EVENT));
 
-        gpu->funcs->flush(gpu, gpu->rb[0]);
+        a5xx_flush(gpu, gpu->rb[0], true);
         if (!a5xx_idle(gpu, gpu->rb[0]))
             return -EINVAL;
     }
@@ -756,7 +817,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
         OUT_PKT7(gpu->rb[0], CP_SET_SECURE_MODE, 1);
         OUT_RING(gpu->rb[0], 0x00000000);
 
-        gpu->funcs->flush(gpu, gpu->rb[0]);
+        a5xx_flush(gpu, gpu->rb[0], true);
         if (!a5xx_idle(gpu, gpu->rb[0]))
             return -EINVAL;
     } else if (ret == -ENODEV) {
@@ -823,6 +884,11 @@ static void a5xx_destroy(struct msm_gpu *gpu)
         drm_gem_object_put(a5xx_gpu->gpmu_bo);
     }
 
+    if (a5xx_gpu->shadow_bo) {
+        msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace);
+        drm_gem_object_put(a5xx_gpu->shadow_bo);
+    }
+
     adreno_gpu_cleanup(adreno_gpu);
     kfree(a5xx_gpu);
 }
@@ -1430,6 +1496,17 @@ static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
     return (unsigned long)busy_time;
 }
 
+static uint32_t a5xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+    struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+    struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
+
+    if (a5xx_gpu->has_whereami)
+        return a5xx_gpu->shadow[ring->id];
+
+    return ring->memptrs->rptr = gpu_read(gpu, REG_A5XX_CP_RB_RPTR);
+}
+
 static const struct adreno_gpu_funcs funcs = {
     .base = {
         .get_param = adreno_get_param,
@@ -1438,7 +1515,6 @@ static const struct adreno_gpu_funcs funcs = {
         .pm_resume = a5xx_pm_resume,
         .recover = a5xx_recover,
         .submit = a5xx_submit,
-        .flush = a5xx_flush,
         .active_ring = a5xx_active_ring,
         .irq = a5xx_irq,
         .destroy = a5xx_destroy,
@@ -1452,6 +1528,7 @@ static const struct adreno_gpu_funcs funcs = {
         .gpu_state_get = a5xx_gpu_state_get,
         .gpu_state_put = a5xx_gpu_state_put,
         .create_address_space = adreno_iommu_create_address_space,
+        .get_rptr = a5xx_get_rptr,
     },
     .get_timestamp = a5xx_get_timestamp,
 };
@@ -1516,8 +1593,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 
     check_speed_bin(&pdev->dev);
 
-    /* Restricting nr_rings to 1 to temporarily disable preemption */
-    ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);
+    ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 4);
     if (ret) {
         a5xx_destroy(&(a5xx_gpu->base.base));
         return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
index 1e5b1a15a70f..c7187bcc5e90 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
@@ -37,6 +37,13 @@ struct a5xx_gpu {
 
     atomic_t preempt_state;
     struct timer_list preempt_timer;
+
+    struct drm_gem_object *shadow_bo;
+    uint64_t shadow_iova;
+    uint32_t *shadow;
+
+    /* True if the microcode supports the WHERE_AM_I opcode */
+    bool has_whereami;
 };
 
 #define to_a5xx_gpu(x) container_of(x, struct a5xx_gpu, base)
 
@@ -141,6 +148,9 @@ static inline int spin_usecs(struct msm_gpu *gpu, uint32_t usecs,
     return -ETIMEDOUT;
 }
 
+#define shadowptr(a5xx_gpu, ring) ((a5xx_gpu)->shadow_iova + \
+        ((ring)->id * sizeof(uint32_t)))
+
 bool a5xx_idle(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 void a5xx_set_hwcg(struct msm_gpu *gpu, bool state);
 
@@ -150,6 +160,8 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu);
 void a5xx_preempt_irq(struct msm_gpu *gpu);
 void a5xx_preempt_fini(struct msm_gpu *gpu);
 
+void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, bool sync);
+
 /* Return true if we are in a preempt state */
 static inline bool a5xx_in_preempt(struct a5xx_gpu *a5xx_gpu)
 {
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
index 321a8061fd32..f176a6f3eff6 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
@@ -240,7 +240,7 @@ static int a5xx_gpmu_init(struct msm_gpu *gpu)
     OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
     OUT_RING(ring, 1);
 
-    gpu->funcs->flush(gpu, ring);
+    a5xx_flush(gpu, ring, true);
 
     if (!a5xx_idle(gpu, ring)) {
         DRM_ERROR("%s: Unable to load GPMU firmware. GPMU will not be active\n",
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index 9f3fe177b00e..7e04509c4e1f 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -259,8 +259,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
     ptr->magic = A5XX_PREEMPT_RECORD_MAGIC;
     ptr->info = 0;
     ptr->data = 0;
-    ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT;
-    ptr->rptr_addr = rbmemptr(ring, rptr);
+    ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE;
+
+    ptr->rptr_addr = shadowptr(a5xx_gpu, ring);
     ptr->counter = counters_iova;
 
     return 0;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index a833dd0ab751..11635e39ca19 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -422,6 +422,11 @@ int adreno_hw_init(struct msm_gpu *gpu)
 static uint32_t get_rptr(struct adreno_gpu *adreno_gpu,
         struct msm_ringbuffer *ring)
 {
+    struct msm_gpu *gpu = &adreno_gpu->base;
+
+    if (gpu->funcs->get_rptr)
+        return gpu->funcs->get_rptr(gpu, ring);
+
     return ring->memptrs->rptr = adreno_gpu_read(
         adreno_gpu, REG_ADRENO_CP_RB_RPTR);
 }
diff --git a/drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h b/drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h
index 3931eecadaff..59bb8c1ffce6 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_pm4.xml.h
@@ -298,6 +298,7 @@ enum adreno_pm4_type3_packets {
     CP_SET_BIN_DATA5_OFFSET = 46,
     CP_SET_CTXSWITCH_IB = 85,
     CP_REG_WRITE = 109,
+    CP_WHERE_AM_I = 98,
 };
 
 enum adreno_state_block {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 5ee358b480e6..6c9e1fdc1a76 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -68,6 +68,7 @@ struct msm_gpu_funcs {
             (struct msm_gpu *gpu, struct platform_device *pdev);
     struct msm_gem_address_space *(*create_private_address_space)
             (struct msm_gpu *gpu);
+    uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 };
 
 struct msm_gpu {
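For reference, the patchlevel test that gates has_whereami can be exercised in
isolation. Below is a minimal standalone sketch; the helper name and the
firmware words are invented for illustration, and only the nibble logic
mirrors a5xx_ucode_check_version() in the patch above.

    /* Standalone sketch of the a5xx microcode patchlevel check: a patched
     * PFP image marks the lowest nibble of dword 0 with 0xa and carries
     * its patchlevel in the lowest nibble of dword 2 (buf[2]). The words
     * below are made up for the example.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool pfp_has_whereami(const uint32_t *buf)
    {
        /* lowest nibble of dword 0 == 0xa means a patched image */
        if ((buf[0] & 0xf) != 0xa)
            return false;

        /* patchlevel >= 1 means CP_WHERE_AM_I is supported */
        return (buf[2] & 0xf) >= 1;
    }

    int main(void)
    {
        const uint32_t patched[3] = { 0x0000034a, 0x0, 0x00000131 };
        const uint32_t legacy[3]  = { 0x00000340, 0x0, 0x00000130 };

        printf("patched: %d\n", pfp_has_whereami(patched)); /* prints 1 */
        printf("legacy:  %d\n", pfp_has_whereami(legacy));  /* prints 0 */
        return 0;
    }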
From patchwork Mon Sep 14 22:40:22 2020
X-Patchwork-Submitter: Jordan Crouse <jcrouse@codeaurora.org>
X-Patchwork-Id: 11775081
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Subject: [PATCH 2/3] drm/msm: a6xx: Use WHERE_AM_I for eligible targets
Date: Mon, 14 Sep 2020 16:40:22 -0600
Message-Id: <20200914224023.1495082-3-jcrouse@codeaurora.org>
In-Reply-To: <20200914224023.1495082-1-jcrouse@codeaurora.org>
References: <20200914224023.1495082-1-jcrouse@codeaurora.org>
Cc: Jonathan Marek, David Airlie, freedreno@lists.freedesktop.org,
    Sharat Masetty, Akhil P Oommen, dri-devel@lists.freedesktop.org,
    linux-kernel@vger.kernel.org, Sean Paul
Support the WHERE_AM_I opcode for the A618, A630 and A640 GPUs if the
microcode supports it. The WHERE_AM_I opcode allows the RPTR shadow to be
updated in privileged memory, which protects the shadow from being read or
written by user submissions. A650 already supports extended APRIV, which
has built-in hardware support for accessing privileged memory from the CP,
so it can go back to using the hardware RPTR shadow feature.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 87 ++++++++++++++++++++++++++-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.h |  9 +++
 2 files changed, 93 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index a3a8d6fd06bb..9cce2b01b1a7 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -51,9 +51,20 @@ bool a6xx_idle(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
 
 static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
 {
+    struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+    struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
     uint32_t wptr;
     unsigned long flags;
 
+    /* Expanded APRIV doesn't need to issue the WHERE_AM_I opcode */
+    if (a6xx_gpu->has_whereami && !adreno_gpu->base.hw_apriv) {
+        struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+
+        OUT_PKT7(ring, CP_WHERE_AM_I, 2);
+        OUT_RING(ring, lower_32_bits(shadowptr(a6xx_gpu, ring)));
+        OUT_RING(ring, upper_32_bits(shadowptr(a6xx_gpu, ring)));
+    }
+
     spin_lock_irqsave(&ring->lock, flags);
 
     /* Copy the shadow to the actual register */
@@ -508,6 +519,30 @@ static int a6xx_cp_init(struct msm_gpu *gpu)
     return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
 }
 
+static void a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu,
+        struct drm_gem_object *obj)
+{
+    u32 *buf = msm_gem_get_vaddr_active(obj);
+
+    if (IS_ERR(buf))
+        return;
+
+    /*
+     * If the lowest nibble is 0xa that is an indication that this
+     * microcode has been patched. The actual version is in dword [3]
+     * but we only care about the patchlevel which is the lowest nibble
+     * of dword [3]
+     *
+     * Otherwise check that the firmware is greater than or equal to 1.90
+     * which was the first version that had this fix built in
+     */
+    if (((buf[0] & 0xf) == 0xa) && (buf[2] & 0xf) >= 1)
+        a6xx_gpu->has_whereami = true;
+    else if ((buf[0] & 0xfff) > 0x190)
+        a6xx_gpu->has_whereami = true;
+
+    msm_gem_put_vaddr(obj);
+}
+
 static int a6xx_ucode_init(struct msm_gpu *gpu)
 {
     struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -528,6 +563,7 @@ static int a6xx_ucode_init(struct msm_gpu *gpu)
         }
 
         msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw");
+        a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo);
     }
 
     gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE_LO,
@@ -743,8 +779,37 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
     gpu_write64(gpu, REG_A6XX_CP_RB_BASE, REG_A6XX_CP_RB_BASE_HI,
         gpu->rb[0]->iova);
 
-    gpu_write(gpu, REG_A6XX_CP_RB_CNTL,
-        MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
+    /* Targets that support extended APRIV can use the RPTR shadow from
+     * hardware but all the other ones need to disable the feature. Targets
+     * that support the WHERE_AM_I opcode can use that instead
+     */
+    if (adreno_gpu->base.hw_apriv)
+        gpu_write(gpu, REG_A6XX_CP_RB_CNTL, MSM_GPU_RB_CNTL_DEFAULT);
+    else
+        gpu_write(gpu, REG_A6XX_CP_RB_CNTL,
+            MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
+
+    /*
+     * Expanded APRIV and targets that support WHERE_AM_I both need a
+     * privileged buffer to store the RPTR shadow
+     */
+
+    if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami) {
+        if (!a6xx_gpu->shadow_bo) {
+            a6xx_gpu->shadow = msm_gem_kernel_new_locked(gpu->dev,
+                sizeof(u32) * gpu->nr_rings,
+                MSM_BO_UNCACHED | MSM_BO_MAP_PRIV,
+                gpu->aspace, &a6xx_gpu->shadow_bo,
+                &a6xx_gpu->shadow_iova);
+
+            if (IS_ERR(a6xx_gpu->shadow))
+                return PTR_ERR(a6xx_gpu->shadow);
+        }
+
+        gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR_LO,
+            REG_A6XX_CP_RB_RPTR_ADDR_HI,
+            shadowptr(a6xx_gpu, gpu->rb[0]));
+    }
 
     /* Always come up on rb 0 */
     a6xx_gpu->cur_ring = gpu->rb[0];
@@ -1033,6 +1098,11 @@ static void a6xx_destroy(struct msm_gpu *gpu)
         drm_gem_object_put(a6xx_gpu->sqe_bo);
     }
 
+    if (a6xx_gpu->shadow_bo) {
+        msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace);
+        drm_gem_object_put(a6xx_gpu->shadow_bo);
+    }
+
     a6xx_gmu_remove(a6xx_gpu);
 
     adreno_gpu_cleanup(adreno_gpu);
@@ -1081,6 +1151,17 @@ a6xx_create_private_address_space(struct msm_gpu *gpu)
         "gpu", 0x100000000ULL, 0x1ffffffffULL);
 }
 
+static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+    struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+    struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+
+    if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami)
+        return a6xx_gpu->shadow[ring->id];
+
+    return ring->memptrs->rptr = gpu_read(gpu, REG_A6XX_CP_RB_RPTR);
+}
+
 static const struct adreno_gpu_funcs funcs = {
     .base = {
         .get_param = adreno_get_param,
@@ -1089,7 +1170,6 @@ static const struct adreno_gpu_funcs funcs = {
         .pm_resume = a6xx_pm_resume,
         .recover = a6xx_recover,
         .submit = a6xx_submit,
-        .flush = a6xx_flush,
         .active_ring = a6xx_active_ring,
         .irq = a6xx_irq,
         .destroy = a6xx_destroy,
@@ -1105,6 +1185,7 @@ static const struct adreno_gpu_funcs funcs = {
 #endif
         .create_address_space = adreno_iommu_create_address_space,
         .create_private_address_space = a6xx_create_private_address_space,
+        .get_rptr = a6xx_get_rptr,
     },
     .get_timestamp = a6xx_get_timestamp,
 };
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index da22d7549d9b..3eeebf6a754b 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -22,6 +22,12 @@ struct a6xx_gpu {
     struct msm_file_private *cur_ctx;
 
     struct a6xx_gmu gmu;
+
+    struct drm_gem_object *shadow_bo;
+    uint64_t shadow_iova;
+    uint32_t *shadow;
+
+    bool has_whereami;
 };
 
 #define to_a6xx_gpu(x) container_of(x, struct a6xx_gpu, base)
@@ -51,6 +57,9 @@ static inline bool a6xx_has_gbif(struct adreno_gpu *gpu)
     return true;
 }
 
+#define shadowptr(_a6xx_gpu, _ring) ((_a6xx_gpu)->shadow_iova + \
+        ((_ring)->id * sizeof(uint32_t)))
+
 int a6xx_gmu_resume(struct a6xx_gpu *gpu);
 int a6xx_gmu_stop(struct a6xx_gpu *gpu);
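The a6xx variant of this change adds a second dimension: targets with
extended APRIV trust the hardware-maintained shadow outright. The following
is a small self-contained sketch of the resulting RPTR-source selection;
the struct and field names are invented stand-ins (the real driver consults
struct a6xx_gpu and adreno_gpu in a6xx_get_rptr() above).

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for the state a6xx_get_rptr() consults */
    struct fake_a6xx {
        bool hw_apriv;      /* extended APRIV target, e.g. A650 */
        bool has_whereami;  /* SQE microcode supports CP_WHERE_AM_I */
        uint32_t shadow;    /* privileged RPTR shadow slot for this ring */
        uint32_t rb_rptr;   /* stand-in for a REG_A6XX_CP_RB_RPTR read */
    };

    /* The shadow is trusted when either APRIV protects it in hardware or
     * the microcode keeps it current via CP_WHERE_AM_I; otherwise fall
     * back to a (slow) register read. */
    static uint32_t get_rptr(const struct fake_a6xx *gpu)
    {
        if (gpu->hw_apriv || gpu->has_whereami)
            return gpu->shadow;

        return gpu->rb_rptr;
    }

    int main(void)
    {
        struct fake_a6xx a650 = { .hw_apriv = true, .shadow = 42, .rb_rptr = 7 };
        struct fake_a6xx a630_old_fw = { .shadow = 42, .rb_rptr = 7 };

        printf("a650:        %u\n", get_rptr(&a650));        /* 42 (shadow) */
        printf("a630 old fw: %u\n", get_rptr(&a630_old_fw)); /*  7 (register) */
        return 0;
    }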
From patchwork Mon Sep 14 22:40:23 2020
X-Patchwork-Submitter: Jordan Crouse <jcrouse@codeaurora.org>
X-Patchwork-Id: 11775079
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Subject: [PATCH 3/3] drm/msm: Get rid of the REG_ADRENO offsets
Date: Mon, 14 Sep 2020 16:40:23 -0600
Message-Id: <20200914224023.1495082-4-jcrouse@codeaurora.org>
In-Reply-To: <20200914224023.1495082-1-jcrouse@codeaurora.org>
References: <20200914224023.1495082-1-jcrouse@codeaurora.org>
Cc: Wambui Karuga, Jonathan Marek, David Airlie,
    freedreno@lists.freedesktop.org, Sharat Masetty, Akhil P Oommen,
    dri-devel@lists.freedesktop.org, Bjorn Andersson, Emil Velikov,
    Ben Dooks, AngeloGioacchino Del Regno, Sean Paul,
    linux-kernel@vger.kernel.org, Brian Masney

As newer GPU families are added, it makes less sense to maintain "generic"
versions of functions for older families. Move adreno_submit() and
get_rptr() into the target-specific code for a2xx, a3xx and a4xx. Add a
parameter to adreno_flush() to pass the target-specific WPTR register
instead of relying on the generic register. All of this gets rid of the
last of the REG_ADRENO offsets, so remove all the register definitions
and infrastructure.
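Before the diff below, here is a rough standalone sketch of the reshaped
flush path: adreno_flush() now receives the target-specific WPTR register
from its caller instead of translating through the removed REG_ADRENO_*
table. The register value and the in-memory ring here are invented for
illustration; only the wptr computation mirrors the helper in the patch.

    #include <stdint.h>
    #include <stdio.h>

    /* illustrative; stands in for MSM_GPU_RINGBUFFER_SZ (bytes) */
    #define RINGBUFFER_SZ 0x8000

    struct ring {
        uint32_t *start;
        uint32_t *cur;
    };

    /* MMIO stub so the sketch runs in userspace */
    static void gpu_write(uint32_t reg, uint32_t val)
    {
        printf("write reg 0x%04x = %u\n", reg, val);
    }

    /* Mirrors adreno_flush(gpu, ring, reg): compute the write pointer in
     * dwords, then poke whichever WPTR register the caller passed in
     * (REG_AXXX_CP_RB_WPTR on a2xx/a3xx, REG_A4XX_CP_RB_WPTR on a4xx). */
    static void flush(const struct ring *ring, uint32_t wptr_reg)
    {
        uint32_t wptr = (uint32_t)(ring->cur - ring->start) %
                (RINGBUFFER_SZ >> 2);

        gpu_write(wptr_reg, wptr);
    }

    int main(void)
    {
        static uint32_t buf[RINGBUFFER_SZ >> 2];
        struct ring ring = { .start = buf, .cur = buf + 16 };

        flush(&ring, 0x12e); /* made-up register offset */
        return 0;
    }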
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c   | 65 +++++++++++++-----
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c   | 77 ++++++++++++++++-----
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c   | 82 ++++++++++++++-------
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c   | 12 ----
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   | 13 ----
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 82 +--------------------
 drivers/gpu/drm/msm/adreno/adreno_gpu.h | 81 +--------------------
 7 files changed, 178 insertions(+), 234 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 48fa49f69d6d..7e82c41a85f1 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -10,6 +10,48 @@ extern bool hang_debug;
 static void a2xx_dump(struct msm_gpu *gpu);
 static bool a2xx_idle(struct msm_gpu *gpu);
 
+static void a2xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+{
+    struct msm_drm_private *priv = gpu->dev->dev_private;
+    struct msm_ringbuffer *ring = submit->ring;
+    unsigned int i;
+
+    for (i = 0; i < submit->nr_cmds; i++) {
+        switch (submit->cmd[i].type) {
+        case MSM_SUBMIT_CMD_IB_TARGET_BUF:
+            /* ignore IB-targets */
+            break;
+        case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
+            /* ignore if there has not been a ctx switch: */
+            if (priv->lastctx == submit->queue->ctx)
+                break;
+            fallthrough;
+        case MSM_SUBMIT_CMD_BUF:
+            OUT_PKT3(ring, CP_INDIRECT_BUFFER_PFD, 2);
+            OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
+            OUT_RING(ring, submit->cmd[i].size);
+            OUT_PKT2(ring);
+            break;
+        }
+    }
+
+    OUT_PKT0(ring, REG_AXXX_CP_SCRATCH_REG2, 1);
+    OUT_RING(ring, submit->seqno);
+
+    /* wait for idle before cache flush/interrupt */
+    OUT_PKT3(ring, CP_WAIT_FOR_IDLE, 1);
+    OUT_RING(ring, 0x00000000);
+
+    OUT_PKT3(ring, CP_EVENT_WRITE, 3);
+    OUT_RING(ring, CACHE_FLUSH_TS);
+    OUT_RING(ring, rbmemptr(ring, fence));
+    OUT_RING(ring, submit->seqno);
+    OUT_PKT3(ring, CP_INTERRUPT, 1);
+    OUT_RING(ring, 0x80000000);
+
+    adreno_flush(gpu, ring, REG_AXXX_CP_RB_WPTR);
+}
+
 static bool a2xx_me_init(struct msm_gpu *gpu)
 {
     struct msm_ringbuffer *ring = gpu->rb[0];
@@ -53,7 +95,7 @@ static bool a2xx_me_init(struct msm_gpu *gpu)
     OUT_PKT3(ring, CP_SET_PROTECTED_MODE, 1);
     OUT_RING(ring, 1);
 
-    gpu->funcs->flush(gpu, ring);
+    adreno_flush(gpu, ring, REG_AXXX_CP_RB_WPTR);
     return a2xx_idle(gpu);
 }
 
@@ -421,16 +463,11 @@ a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
     return aspace;
 }
 
-/* Register offset defines for A2XX - copy of A3XX */
-static const unsigned int a2xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_AXXX_CP_RB_BASE),
-    REG_ADRENO_SKIP(REG_ADRENO_CP_RB_BASE_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR, REG_AXXX_CP_RB_RPTR_ADDR),
-    REG_ADRENO_SKIP(REG_ADRENO_CP_RB_RPTR_ADDR_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR, REG_AXXX_CP_RB_RPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_WPTR, REG_AXXX_CP_RB_WPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_CNTL, REG_AXXX_CP_RB_CNTL),
-};
+static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+    ring->memptrs->rptr = gpu_read(gpu, REG_AXXX_CP_RB_RPTR);
+    return ring->memptrs->rptr;
+}
 
 static const struct adreno_gpu_funcs funcs = {
     .base = {
@@ -439,8 +476,7 @@ static const struct adreno_gpu_funcs funcs = {
         .pm_suspend = msm_gpu_pm_suspend,
         .pm_resume = msm_gpu_pm_resume,
         .recover = a2xx_recover,
-        .submit = adreno_submit,
-        .flush = adreno_flush,
+        .submit = a2xx_submit,
         .active_ring = adreno_active_ring,
         .irq = a2xx_irq,
         .destroy = a2xx_destroy,
@@ -450,6 +486,7 @@ static const struct adreno_gpu_funcs funcs = {
         .gpu_state_get = a2xx_gpu_state_get,
         .gpu_state_put = adreno_gpu_state_put,
         .create_address_space = a2xx_create_address_space,
+        .get_rptr = a2xx_get_rptr,
     },
 };
 
@@ -491,8 +528,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
     else
         adreno_gpu->registers = a220_registers;
 
-    adreno_gpu->reg_offsets = a2xx_register_offsets;
-
     ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);
     if (ret)
         goto fail;
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index f6471145a7a6..f29c77d9cd42 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -28,6 +28,61 @@ extern bool hang_debug;
 static void a3xx_dump(struct msm_gpu *gpu);
 static bool a3xx_idle(struct msm_gpu *gpu);
 
+static void a3xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+{
+    struct msm_drm_private *priv = gpu->dev->dev_private;
+    struct msm_ringbuffer *ring = submit->ring;
+    unsigned int i;
+
+    for (i = 0; i < submit->nr_cmds; i++) {
+        switch (submit->cmd[i].type) {
+        case MSM_SUBMIT_CMD_IB_TARGET_BUF:
+            /* ignore IB-targets */
+            break;
+        case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
+            /* ignore if there has not been a ctx switch: */
+            if (priv->lastctx == submit->queue->ctx)
+                break;
+            fallthrough;
+        case MSM_SUBMIT_CMD_BUF:
+            OUT_PKT3(ring, CP_INDIRECT_BUFFER_PFD, 2);
+            OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
+            OUT_RING(ring, submit->cmd[i].size);
+            OUT_PKT2(ring);
+            break;
+        }
+    }
+
+    OUT_PKT0(ring, REG_AXXX_CP_SCRATCH_REG2, 1);
+    OUT_RING(ring, submit->seqno);
+
+    /* Flush HLSQ lazy updates to make sure there is nothing
+     * pending for indirect loads after the timestamp has
+     * passed:
+     */
+    OUT_PKT3(ring, CP_EVENT_WRITE, 1);
+    OUT_RING(ring, HLSQ_FLUSH);
+
+    /* wait for idle before cache flush/interrupt */
+    OUT_PKT3(ring, CP_WAIT_FOR_IDLE, 1);
+    OUT_RING(ring, 0x00000000);
+
+    /* BIT(31) of CACHE_FLUSH_TS triggers CACHE_FLUSH_TS IRQ from GPU */
+    OUT_PKT3(ring, CP_EVENT_WRITE, 3);
+    OUT_RING(ring, CACHE_FLUSH_TS | BIT(31));
+    OUT_RING(ring, rbmemptr(ring, fence));
+    OUT_RING(ring, submit->seqno);
+
+#if 0
+    /* Dummy set-constant to trigger context rollover */
+    OUT_PKT3(ring, CP_SET_CONSTANT, 2);
+    OUT_RING(ring, CP_REG(REG_A3XX_HLSQ_CL_KERNEL_GROUP_X_REG));
+    OUT_RING(ring, 0x00000000);
+#endif
+
+    adreno_flush(gpu, ring, REG_AXXX_CP_RB_WPTR);
+}
+
 static bool a3xx_me_init(struct msm_gpu *gpu)
 {
     struct msm_ringbuffer *ring = gpu->rb[0];
@@ -51,7 +106,7 @@ static bool a3xx_me_init(struct msm_gpu *gpu)
     OUT_RING(ring, 0x00000000);
     OUT_RING(ring, 0x00000000);
 
-    gpu->funcs->flush(gpu, ring);
+    adreno_flush(gpu, ring, REG_AXXX_CP_RB_WPTR);
     return a3xx_idle(gpu);
 }
 
@@ -423,16 +478,11 @@ static struct msm_gpu_state *a3xx_gpu_state_get(struct msm_gpu *gpu)
     return state;
 }
 
-/* Register offset defines for A3XX */
-static const unsigned int a3xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_AXXX_CP_RB_BASE),
-    REG_ADRENO_SKIP(REG_ADRENO_CP_RB_BASE_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR, REG_AXXX_CP_RB_RPTR_ADDR),
-    REG_ADRENO_SKIP(REG_ADRENO_CP_RB_RPTR_ADDR_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR, REG_AXXX_CP_RB_RPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_WPTR, REG_AXXX_CP_RB_WPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_CNTL, REG_AXXX_CP_RB_CNTL),
-};
+static u32 a3xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+    ring->memptrs->rptr = gpu_read(gpu, REG_AXXX_CP_RB_RPTR);
+    return ring->memptrs->rptr;
+}
 
 static const struct adreno_gpu_funcs funcs = {
     .base = {
@@ -441,8 +491,7 @@ static const struct adreno_gpu_funcs funcs = {
         .pm_suspend = msm_gpu_pm_suspend,
         .pm_resume = msm_gpu_pm_resume,
         .recover = a3xx_recover,
-        .submit = adreno_submit,
-        .flush = adreno_flush,
+        .submit = a3xx_submit,
         .active_ring = adreno_active_ring,
         .irq = a3xx_irq,
         .destroy = a3xx_destroy,
@@ -452,6 +501,7 @@ static const struct adreno_gpu_funcs funcs = {
         .gpu_state_get = a3xx_gpu_state_get,
         .gpu_state_put = adreno_gpu_state_put,
         .create_address_space = adreno_iommu_create_address_space,
+        .get_rptr = a3xx_get_rptr,
     },
 };
 
@@ -490,7 +540,6 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
     gpu->num_perfcntrs = ARRAY_SIZE(perfcntrs);
 
     adreno_gpu->registers = a3xx_registers;
-    adreno_gpu->reg_offsets = a3xx_register_offsets;
 
     ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);
     if (ret)
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index 954753600625..2b93b33b05e4 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -22,6 +22,54 @@ extern bool hang_debug;
 static void a4xx_dump(struct msm_gpu *gpu);
 static bool a4xx_idle(struct msm_gpu *gpu);
 
+static void a4xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+{
+    struct msm_drm_private *priv = gpu->dev->dev_private;
+    struct msm_ringbuffer *ring = submit->ring;
+    unsigned int i;
+
+    for (i = 0; i < submit->nr_cmds; i++) {
+        switch (submit->cmd[i].type) {
+        case MSM_SUBMIT_CMD_IB_TARGET_BUF:
+            /* ignore IB-targets */
+            break;
+        case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
+            /* ignore if there has not been a ctx switch: */
+            if (priv->lastctx == submit->queue->ctx)
+                break;
+            fallthrough;
+        case MSM_SUBMIT_CMD_BUF:
+            OUT_PKT3(ring, CP_INDIRECT_BUFFER_PFE, 2);
+            OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
+            OUT_RING(ring, submit->cmd[i].size);
+            OUT_PKT2(ring);
+            break;
+        }
+    }
+
+    OUT_PKT0(ring, REG_AXXX_CP_SCRATCH_REG2, 1);
+    OUT_RING(ring, submit->seqno);
+
+    /* Flush HLSQ lazy updates to make sure there is nothing
+     * pending for indirect loads after the timestamp has
+     * passed:
+     */
+    OUT_PKT3(ring, CP_EVENT_WRITE, 1);
+    OUT_RING(ring, HLSQ_FLUSH);
+
+    /* wait for idle before cache flush/interrupt */
+    OUT_PKT3(ring, CP_WAIT_FOR_IDLE, 1);
+    OUT_RING(ring, 0x00000000);
+
+    /* BIT(31) of CACHE_FLUSH_TS triggers CACHE_FLUSH_TS IRQ from GPU */
+    OUT_PKT3(ring, CP_EVENT_WRITE, 3);
+    OUT_RING(ring, CACHE_FLUSH_TS | BIT(31));
+    OUT_RING(ring, rbmemptr(ring, fence));
+    OUT_RING(ring, submit->seqno);
+
+    adreno_flush(gpu, ring, REG_A4XX_CP_RB_WPTR);
+}
+
 /*
  * a4xx_enable_hwcg() - Program the clock control registers
  * @device: The adreno device pointer
@@ -129,7 +177,7 @@ static bool a4xx_me_init(struct msm_gpu *gpu)
     OUT_RING(ring, 0x00000000);
     OUT_RING(ring, 0x00000000);
 
-    gpu->funcs->flush(gpu, ring);
+    adreno_flush(gpu, ring, REG_A4XX_CP_RB_WPTR);
     return a4xx_idle(gpu);
 }
 
@@ -515,17 +563,6 @@ static struct msm_gpu_state *a4xx_gpu_state_get(struct msm_gpu *gpu)
     return state;
 }
 
-/* Register offset defines for A4XX, in order of enum adreno_regs */
-static const unsigned int a4xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_A4XX_CP_RB_BASE),
-    REG_ADRENO_SKIP(REG_ADRENO_CP_RB_BASE_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR, REG_A4XX_CP_RB_RPTR_ADDR),
-    REG_ADRENO_SKIP(REG_ADRENO_CP_RB_RPTR_ADDR_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR, REG_A4XX_CP_RB_RPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_WPTR, REG_A4XX_CP_RB_WPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_CNTL, REG_A4XX_CP_RB_CNTL),
-};
-
 static void a4xx_dump(struct msm_gpu *gpu)
 {
     printk("status:   %08x\n",
@@ -576,6 +613,12 @@ static int a4xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
     return 0;
 }
 
+static u32 a4xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+    ring->memptrs->rptr = gpu_read(gpu, REG_A4XX_CP_RB_RPTR);
+    return ring->memptrs->rptr;
+}
+
 static const struct adreno_gpu_funcs funcs = {
     .base = {
         .get_param = adreno_get_param,
@@ -583,8 +626,7 @@ static const struct adreno_gpu_funcs funcs = {
         .pm_suspend = a4xx_pm_suspend,
         .pm_resume = a4xx_pm_resume,
         .recover = a4xx_recover,
-        .submit = adreno_submit,
-        .flush = adreno_flush,
+        .submit = a4xx_submit,
         .active_ring = adreno_active_ring,
         .irq = a4xx_irq,
         .destroy = a4xx_destroy,
@@ -594,6 +636,7 @@ static const struct adreno_gpu_funcs funcs = {
         .gpu_state_get = a4xx_gpu_state_get,
         .gpu_state_put = adreno_gpu_state_put,
         .create_address_space = adreno_iommu_create_address_space,
+        .get_rptr = a4xx_get_rptr,
     },
     .get_timestamp = a4xx_get_timestamp,
 };
@@ -631,15 +674,12 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
     adreno_gpu->registers = adreno_is_a405(adreno_gpu) ? a405_registers :
                                 a4xx_registers;
-    adreno_gpu->reg_offsets = a4xx_register_offsets;
 
     /* if needed, allocate gmem: */
-    if (adreno_is_a4xx(adreno_gpu)) {
-        ret = adreno_gpu_ocmem_init(dev->dev, adreno_gpu,
-                        &a4xx_gpu->ocmem);
-        if (ret)
-            goto fail;
-    }
+    ret = adreno_gpu_ocmem_init(dev->dev, adreno_gpu,
+                    &a4xx_gpu->ocmem);
+    if (ret)
+        goto fail;
 
     if (!gpu->aspace) {
         /* TODO we think it is possible to configure the GPU to
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 835aaef72b00..c941c8138f25 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1121,17 +1121,6 @@ static irqreturn_t a5xx_irq(struct msm_gpu *gpu)
     return IRQ_HANDLED;
 }
 
-static const u32 a5xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_A5XX_CP_RB_BASE),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE_HI, REG_A5XX_CP_RB_BASE_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR, REG_A5XX_CP_RB_RPTR_ADDR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR_HI,
-        REG_A5XX_CP_RB_RPTR_ADDR_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR, REG_A5XX_CP_RB_RPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_WPTR, REG_A5XX_CP_RB_WPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_CNTL, REG_A5XX_CP_RB_CNTL),
-};
-
 static const u32 a5xx_registers[] = {
     0x0000, 0x0002, 0x0004, 0x0020, 0x0022, 0x0026, 0x0029, 0x002B,
     0x002E, 0x0035, 0x0038, 0x0042, 0x0044, 0x0044, 0x0047, 0x0095,
@@ -1587,7 +1576,6 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
     gpu = &adreno_gpu->base;
 
     adreno_gpu->registers = a5xx_registers;
-    adreno_gpu->reg_offsets = a5xx_register_offsets;
 
     a5xx_gpu->lm_leakage = 0x4E001A;
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 9cce2b01b1a7..3248c89aa001 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1022,18 +1022,6 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
     return IRQ_HANDLED;
 }
 
-static const u32 a6xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_A6XX_CP_RB_BASE),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE_HI, REG_A6XX_CP_RB_BASE_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR,
-        REG_A6XX_CP_RB_RPTR_ADDR_LO),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR_ADDR_HI,
-        REG_A6XX_CP_RB_RPTR_ADDR_HI),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_RPTR, REG_A6XX_CP_RB_RPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_WPTR, REG_A6XX_CP_RB_WPTR),
-    REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_CNTL, REG_A6XX_CP_RB_CNTL),
-};
-
 static int a6xx_pm_resume(struct msm_gpu *gpu)
 {
     struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -1208,7 +1196,6 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
     gpu = &adreno_gpu->base;
 
     adreno_gpu->registers = NULL;
-    adreno_gpu->reg_offsets = a6xx_register_offsets;
 
     if (adreno_is_a650(adreno_gpu))
         adreno_gpu->base.hw_apriv = true;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 11635e39ca19..fd8f491f2e48 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -424,11 +424,7 @@ static uint32_t get_rptr(struct adreno_gpu *adreno_gpu,
         struct msm_ringbuffer *ring)
 {
     struct msm_gpu *gpu = &adreno_gpu->base;
 
-    if (gpu->funcs->get_rptr)
-        return gpu->funcs->get_rptr(gpu, ring);
-
-    return ring->memptrs->rptr = adreno_gpu_read(
-        adreno_gpu, REG_ADRENO_CP_RB_RPTR);
+    return gpu->funcs->get_rptr(gpu, ring);
 }
 
 struct msm_ringbuffer *adreno_active_ring(struct msm_gpu *gpu)
@@ -454,80 +450,8 @@ void adreno_recover(struct msm_gpu *gpu)
     }
 }
 
-void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+void adreno_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, u32 reg)
 {
-    struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
-    struct msm_drm_private *priv = gpu->dev->dev_private;
-    struct msm_ringbuffer *ring = submit->ring;
-    unsigned i;
-
-    for (i = 0; i < submit->nr_cmds; i++) {
-        switch (submit->cmd[i].type) {
-        case MSM_SUBMIT_CMD_IB_TARGET_BUF:
-            /* ignore IB-targets */
-            break;
-        case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
-            /* ignore if there has not been a ctx switch: */
-            if (priv->lastctx == submit->queue->ctx)
-                break;
-            /* fall-thru */
-        case MSM_SUBMIT_CMD_BUF:
-            OUT_PKT3(ring, adreno_is_a4xx(adreno_gpu) ?
-                CP_INDIRECT_BUFFER_PFE : CP_INDIRECT_BUFFER_PFD, 2);
-            OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
-            OUT_RING(ring, submit->cmd[i].size);
-            OUT_PKT2(ring);
-            break;
-        }
-    }
-
-    OUT_PKT0(ring, REG_AXXX_CP_SCRATCH_REG2, 1);
-    OUT_RING(ring, submit->seqno);
-
-    if (adreno_is_a3xx(adreno_gpu) || adreno_is_a4xx(adreno_gpu)) {
-        /* Flush HLSQ lazy updates to make sure there is nothing
-         * pending for indirect loads after the timestamp has
-         * passed:
-         */
-        OUT_PKT3(ring, CP_EVENT_WRITE, 1);
-        OUT_RING(ring, HLSQ_FLUSH);
-    }
-
-    /* wait for idle before cache flush/interrupt */
-    OUT_PKT3(ring, CP_WAIT_FOR_IDLE, 1);
-    OUT_RING(ring, 0x00000000);
-
-    if (!adreno_is_a2xx(adreno_gpu)) {
-        /* BIT(31) of CACHE_FLUSH_TS triggers CACHE_FLUSH_TS IRQ from GPU */
-        OUT_PKT3(ring, CP_EVENT_WRITE, 3);
-        OUT_RING(ring, CACHE_FLUSH_TS | BIT(31));
-        OUT_RING(ring, rbmemptr(ring, fence));
-        OUT_RING(ring, submit->seqno);
-    } else {
-        /* BIT(31) means something else on a2xx */
-        OUT_PKT3(ring, CP_EVENT_WRITE, 3);
-        OUT_RING(ring, CACHE_FLUSH_TS);
-        OUT_RING(ring, rbmemptr(ring, fence));
-        OUT_RING(ring, submit->seqno);
-        OUT_PKT3(ring, CP_INTERRUPT, 1);
-        OUT_RING(ring, 0x80000000);
-    }
-
-#if 0
-    if (adreno_is_a3xx(adreno_gpu)) {
-        /* Dummy set-constant to trigger context rollover */
-        OUT_PKT3(ring, CP_SET_CONSTANT, 2);
-        OUT_RING(ring, CP_REG(REG_A3XX_HLSQ_CL_KERNEL_GROUP_X_REG));
-        OUT_RING(ring, 0x00000000);
-    }
-#endif
-
-    gpu->funcs->flush(gpu, ring);
-}
-
-void adreno_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
-{
-    struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
     uint32_t wptr;
 
     /* Copy the shadow to the actual register */
@@ -543,7 +467,7 @@ void adreno_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
     /* ensure writes to ringbuffer have hit system memory: */
     mb();
 
-    adreno_gpu_write(adreno_gpu, REG_ADRENO_CP_RB_WPTR, wptr);
+    gpu_write(gpu, reg, wptr);
 }
 
 bool adreno_idle(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 848632758450..c3775f79525a 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -17,29 +17,8 @@
 #include "adreno_common.xml.h"
 #include "adreno_pm4.xml.h"
 
-#define REG_ADRENO_DEFINE(_offset, _reg) [_offset] = (_reg) + 1
-#define REG_SKIP ~0
-#define REG_ADRENO_SKIP(_offset) [_offset] = REG_SKIP
-
 extern bool snapshot_debugbus;
 
-/**
- * adreno_regs: List of registers that are used in across all
- * 3D devices. Each device type has different offset value for the same
- * register, so an array of register offsets are declared for every device
- * and are indexed by the enumeration values defined in this enum
- */
-enum adreno_regs {
-    REG_ADRENO_CP_RB_BASE,
-    REG_ADRENO_CP_RB_BASE_HI,
-    REG_ADRENO_CP_RB_RPTR_ADDR,
-    REG_ADRENO_CP_RB_RPTR_ADDR_HI,
-    REG_ADRENO_CP_RB_RPTR,
-    REG_ADRENO_CP_RB_WPTR,
-    REG_ADRENO_CP_RB_CNTL,
-    REG_ADRENO_REGISTER_MAX,
-};
-
 enum {
     ADRENO_FW_PM4 = 0,
     ADRENO_FW_SQE = 0, /* a6xx */
@@ -176,11 +155,6 @@ static inline bool adreno_is_a225(struct adreno_gpu *gpu)
     return gpu->revn == 225;
 }
 
-static inline bool adreno_is_a3xx(struct adreno_gpu *gpu)
-{
-    return (gpu->revn >= 300) && (gpu->revn < 400);
-}
-
 static inline bool adreno_is_a305(struct adreno_gpu *gpu)
 {
     return gpu->revn == 305;
@@ -207,11 +181,6 @@ static inline bool adreno_is_a330v2(struct adreno_gpu *gpu)
     return adreno_is_a330(gpu) && (gpu->rev.patchid > 0);
 }
 
-static inline bool adreno_is_a4xx(struct adreno_gpu *gpu)
-{
-    return (gpu->revn >= 400) && (gpu->revn < 500);
-}
-
 static inline int adreno_is_a405(struct adreno_gpu *gpu)
 {
     return gpu->revn == 405;
@@ -269,8 +238,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu,
     const struct firmware *fw, u64 *iova);
 int adreno_hw_init(struct msm_gpu *gpu);
 void adreno_recover(struct msm_gpu *gpu);
-void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit);
-void adreno_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
+void adreno_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, u32 reg);
 bool adreno_idle(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 #if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP)
 void adreno_show(struct msm_gpu *gpu, struct msm_gpu_state *state,
@@ -364,59 +332,12 @@ OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
         ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
 }
 
-/*
- * adreno_reg_check() - Checks the validity of a register enum
- * @gpu: Pointer to struct adreno_gpu
- * @offset_name: The register enum that is checked
- */
-static inline bool adreno_reg_check(struct adreno_gpu *gpu,
-        enum adreno_regs offset_name)
-{
-    BUG_ON(offset_name >= REG_ADRENO_REGISTER_MAX || !gpu->reg_offsets[offset_name]);
-
-    /*
-     * REG_SKIP is a special value that tell us that the register in
-     * question isn't implemented on target but don't trigger a BUG(). This
-     * is used to cleanly implement adreno_gpu_write64() and
-     * adreno_gpu_read64() in a generic fashion
-     */
-    if (gpu->reg_offsets[offset_name] == REG_SKIP)
-        return false;
-
-    return true;
-}
-
-static inline u32 adreno_gpu_read(struct adreno_gpu *gpu,
-        enum adreno_regs offset_name)
-{
-    u32 reg = gpu->reg_offsets[offset_name];
-    u32 val = 0;
-    if(adreno_reg_check(gpu,offset_name))
-        val = gpu_read(&gpu->base, reg - 1);
-    return val;
-}
-
-static inline void adreno_gpu_write(struct adreno_gpu *gpu,
-        enum adreno_regs offset_name, u32 data)
-{
-    u32 reg = gpu->reg_offsets[offset_name];
-    if(adreno_reg_check(gpu, offset_name))
-        gpu_write(&gpu->base, reg - 1, data);
-}
-
 struct msm_gpu *a2xx_gpu_init(struct drm_device *dev);
 struct msm_gpu *a3xx_gpu_init(struct drm_device *dev);
 struct msm_gpu *a4xx_gpu_init(struct drm_device *dev);
 struct msm_gpu *a5xx_gpu_init(struct drm_device *dev);
 struct msm_gpu *a6xx_gpu_init(struct drm_device *dev);
 
-static inline void adreno_gpu_write64(struct adreno_gpu *gpu,
-        enum adreno_regs lo, enum adreno_regs hi, u64 data)
-{
-    adreno_gpu_write(gpu, lo, lower_32_bits(data));
-    adreno_gpu_write(gpu, hi, upper_32_bits(data));
-}
-
 static inline uint32_t get_wptr(struct msm_ringbuffer *ring)
 {
     return (ring->cur - ring->start) % (MSM_GPU_RINGBUFFER_SZ >> 2);