From patchwork Thu Sep 25 10:39:38 2014
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 4974091
Message-ID: <5423F0EA.4030708@canonical.com>
Date: Thu, 25 Sep 2014 12:39:38 +0200
From: Maarten Lankhorst
To: Christian König, dri-devel@lists.freedesktop.org
Cc: Michel Dänzer
Subject: [PATCH v2 6/8] drm/radeon: cope with foreign fences inside the reservation object
References: <1410957305-10190-1-git-send-email-maarten.lankhorst@canonical.com>
 <1410957305-10190-7-git-send-email-maarten.lankhorst@canonical.com>
 <54198803.2040508@vodafone.de>
In-Reply-To: <54198803.2040508@vodafone.de>

Not the whole world is a radeon! :-)

Signed-off-by: Maarten Lankhorst
---
Changes:
 - Removed the interruptible parameter; only one place has a use for it,
   and it's the only place that can hit it.
 - Fail faster in radeon_semaphore_sync_resv.
 - Make the break on error in radeon_cs.c more explicit.
 - Upgrade the unlikely() to a WARN_ON_ONCE() in radeon_fence_wait,
   with a comment explaining why.
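Not part of the patch, just for review context: the heart of the change is that
radeon_semaphore_sync_resv() now only feeds fences that belong to *this* radeon
device into the semaphore machinery; anything foreign (another driver, or another
radeon card) is waited on with fence_wait() on the CPU instead. A minimal sketch
of that per-fence decision, using a made-up helper name and leaving out the
shared-fence loop that the real hunk in radeon_semaphore.c has:

/* Illustrative only -- sync_to_one_fence() does not exist in this patch;
 * it condenses the check done in radeon_semaphore_sync_resv() below. */
static int sync_to_one_fence(struct radeon_device *rdev,
			     struct radeon_semaphore *sema,
			     struct fence *f)
{
	struct radeon_fence *fence = f ? to_radeon_fence(f) : NULL;

	if (fence && fence->rdev == rdev) {
		/* fence from this device: queue a GPU semaphore wait */
		radeon_semaphore_sync_fence(sema, fence);
		return 0;
	}

	/* foreign fence: fall back to an interruptible CPU wait */
	return f ? fence_wait(f, true) : 0;
}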
diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 0d761f73a7fa..7bdf80c2603d 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -3993,7 +3993,7 @@ struct radeon_fence *cik_copy_cpdma(struct radeon_device *rdev,
 		return ERR_PTR(r);
 	}
 
-	radeon_semaphore_sync_resv(sem, resv, false);
+	radeon_semaphore_sync_resv(rdev, sem, resv, false);
 	radeon_semaphore_sync_rings(rdev, sem, ring->idx);
 
 	for (i = 0; i < num_loops; i++) {
diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c
index c01a6100c318..c473c9125295 100644
--- a/drivers/gpu/drm/radeon/cik_sdma.c
+++ b/drivers/gpu/drm/radeon/cik_sdma.c
@@ -571,7 +571,7 @@ struct radeon_fence *cik_copy_dma(struct radeon_device *rdev,
 		return ERR_PTR(r);
 	}
 
-	radeon_semaphore_sync_resv(sem, resv, false);
+	radeon_semaphore_sync_resv(rdev, sem, resv, false);
 	radeon_semaphore_sync_rings(rdev, sem, ring->idx);
 
 	for (i = 0; i < num_loops; i++) {
diff --git a/drivers/gpu/drm/radeon/evergreen_dma.c b/drivers/gpu/drm/radeon/evergreen_dma.c
index 946f37d0b469..66bcfadeedd1 100644
--- a/drivers/gpu/drm/radeon/evergreen_dma.c
+++ b/drivers/gpu/drm/radeon/evergreen_dma.c
@@ -133,7 +133,7 @@ struct radeon_fence *evergreen_copy_dma(struct radeon_device *rdev,
 		return ERR_PTR(r);
 	}
 
-	radeon_semaphore_sync_resv(sem, resv, false);
+	radeon_semaphore_sync_resv(rdev, sem, resv, false);
 	radeon_semaphore_sync_rings(rdev, sem, ring->idx);
 
 	for (i = 0; i < num_loops; i++) {
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index 25f367ac4637..f8eb519c3286 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -2912,7 +2912,7 @@ struct radeon_fence *r600_copy_cpdma(struct radeon_device *rdev,
 		return ERR_PTR(r);
 	}
 
-	radeon_semaphore_sync_resv(sem, resv, false);
+	radeon_semaphore_sync_resv(rdev, sem, resv, false);
 	radeon_semaphore_sync_rings(rdev, sem, ring->idx);
 
 	radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
diff --git a/drivers/gpu/drm/radeon/r600_dma.c b/drivers/gpu/drm/radeon/r600_dma.c
index fc54224ce87b..a49db830a47f 100644
--- a/drivers/gpu/drm/radeon/r600_dma.c
+++ b/drivers/gpu/drm/radeon/r600_dma.c
@@ -470,7 +470,7 @@ struct radeon_fence *r600_copy_dma(struct radeon_device *rdev,
 		return ERR_PTR(r);
 	}
 
-	radeon_semaphore_sync_resv(sem, resv, false);
+	radeon_semaphore_sync_resv(rdev, sem, resv, false);
 	radeon_semaphore_sync_rings(rdev, sem, ring->idx);
 
 	for (i = 0; i < num_loops; i++) {
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 864457cd7c98..07aa961bf5ca 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -589,9 +589,10 @@ bool radeon_semaphore_emit_wait(struct radeon_device *rdev, int ring,
 				struct radeon_semaphore *semaphore);
 void radeon_semaphore_sync_fence(struct radeon_semaphore *semaphore,
 				 struct radeon_fence *fence);
-void radeon_semaphore_sync_resv(struct radeon_semaphore *semaphore,
-				struct reservation_object *resv,
-				bool shared);
+int radeon_semaphore_sync_resv(struct radeon_device *rdev,
+			       struct radeon_semaphore *semaphore,
+			       struct reservation_object *resv,
+			       bool shared);
 int radeon_semaphore_sync_rings(struct radeon_device *rdev,
 				struct radeon_semaphore *semaphore,
 				int waiting_ring);
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
index f662de41ba49..1c893447d7cd 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -249,9 +249,9 @@ static int radeon_cs_get_ring(struct radeon_cs_parser *p, u32 ring, s32 priority
 	return 0;
 }
 
-static void radeon_cs_sync_rings(struct radeon_cs_parser *p)
+static int radeon_cs_sync_rings(struct radeon_cs_parser *p)
 {
-	int i;
+	int i, r = 0;
 
 	for (i = 0; i < p->nrelocs; i++) {
 		struct reservation_object *resv;
@@ -260,9 +260,13 @@ static void radeon_cs_sync_rings(struct radeon_cs_parser *p)
 			continue;
 
 		resv = p->relocs[i].robj->tbo.resv;
-		radeon_semaphore_sync_resv(p->ib.semaphore, resv,
-					   p->relocs[i].tv.shared);
+		r = radeon_semaphore_sync_resv(p->rdev, p->ib.semaphore, resv,
+					       p->relocs[i].tv.shared);
+
+		if (r)
+			break;
 	}
+	return r;
 }
 
 /* XXX: note that this is called from the legacy UMS CS ioctl as well */
@@ -472,13 +476,19 @@ static int radeon_cs_ib_chunk(struct radeon_device *rdev,
 		return r;
 	}
 
+	r = radeon_cs_sync_rings(parser);
+	if (r) {
+		if (r != -ERESTARTSYS)
+			DRM_ERROR("Failed to sync rings: %i\n", r);
+		return r;
+	}
+
 	if (parser->ring == R600_RING_TYPE_UVD_INDEX)
 		radeon_uvd_note_usage(rdev);
 	else if ((parser->ring == TN_RING_TYPE_VCE1_INDEX) ||
 		 (parser->ring == TN_RING_TYPE_VCE2_INDEX))
 		radeon_vce_note_usage(rdev);
 
-	radeon_cs_sync_rings(parser);
 	r = radeon_ib_schedule(rdev, &parser->ib, NULL, true);
 	if (r) {
 		DRM_ERROR("Failed to schedule IB !\n");
@@ -565,7 +575,13 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev,
 	if (r) {
 		goto out;
 	}
-	radeon_cs_sync_rings(parser);
+
+	r = radeon_cs_sync_rings(parser);
+	if (r) {
+		if (r != -ERESTARTSYS)
+			DRM_ERROR("Failed to sync rings: %i\n", r);
+		goto out;
+	}
 	radeon_semaphore_sync_fence(parser->ib.semaphore, vm->fence);
 
 	if ((rdev->family >= CHIP_TAHITI) &&
diff --git a/drivers/gpu/drm/radeon/radeon_fence.c b/drivers/gpu/drm/radeon/radeon_fence.c
index af9f2d6bd7d0..995167025282 100644
--- a/drivers/gpu/drm/radeon/radeon_fence.c
+++ b/drivers/gpu/drm/radeon/radeon_fence.c
@@ -541,6 +541,15 @@ int radeon_fence_wait(struct radeon_fence *fence, bool intr)
 	uint64_t seq[RADEON_NUM_RINGS] = {};
 	long r;
 
+	/*
+	 * This function should not be called on !radeon fences.
+	 * If this is the case, it would mean this function can
+	 * also be called on radeon fences belonging to another card.
+	 * exclusive_lock is not held in that case.
+	 */
+	if (WARN_ON_ONCE(!to_radeon_fence(&fence->base)))
+		return fence_wait(&fence->base, intr);
+
 	seq[fence->ring] = fence->seq;
 	r = radeon_fence_wait_seq_timeout(fence->rdev, seq, intr, MAX_SCHEDULE_TIMEOUT);
 	if (r < 0) {
diff --git a/drivers/gpu/drm/radeon/radeon_semaphore.c b/drivers/gpu/drm/radeon/radeon_semaphore.c
index 4d4b0773638a..6deb08f045b7 100644
--- a/drivers/gpu/drm/radeon/radeon_semaphore.c
+++ b/drivers/gpu/drm/radeon/radeon_semaphore.c
@@ -124,27 +124,42 @@ void radeon_semaphore_sync_fence(struct radeon_semaphore *semaphore,
  *
  * Sync to the fence using this semaphore object
  */
-void radeon_semaphore_sync_resv(struct radeon_semaphore *sema,
-				struct reservation_object *resv,
-				bool shared)
+int radeon_semaphore_sync_resv(struct radeon_device *rdev,
+			       struct radeon_semaphore *sema,
+			       struct reservation_object *resv,
+			       bool shared)
 {
 	struct reservation_object_list *flist;
 	struct fence *f;
+	struct radeon_fence *fence;
 	unsigned i;
+	int r = 0;
 
 	/* always sync to the exclusive fence */
 	f = reservation_object_get_excl(resv);
-	radeon_semaphore_sync_fence(sema, (struct radeon_fence*)f);
+	fence = f ? to_radeon_fence(f) : NULL;
+	if (fence && fence->rdev == rdev)
+		radeon_semaphore_sync_fence(sema, fence);
+	else if (f)
+		r = fence_wait(f, true);
 
 	flist = reservation_object_get_list(resv);
-	if (shared || !flist)
-		return;
+	if (shared || !flist || r)
+		return r;
 
 	for (i = 0; i < flist->shared_count; ++i) {
 		f = rcu_dereference_protected(flist->shared[i],
 					      reservation_object_held(resv));
-		radeon_semaphore_sync_fence(sema, (struct radeon_fence*)f);
+		fence = to_radeon_fence(f);
+		if (fence && fence->rdev == rdev)
+			radeon_semaphore_sync_fence(sema, fence);
+		else
+			r = fence_wait(f, true);
+
+		if (r)
+			break;
 	}
+	return r;
 }
 
 /**
diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
index ce870959dff8..8af1a94e7448 100644
--- a/drivers/gpu/drm/radeon/radeon_vm.c
+++ b/drivers/gpu/drm/radeon/radeon_vm.c
@@ -698,7 +698,7 @@ int radeon_vm_update_page_directory(struct radeon_device *rdev,
 
 	if (ib.length_dw != 0) {
 		radeon_asic_vm_pad_ib(rdev, &ib);
-		radeon_semaphore_sync_resv(ib.semaphore, pd->tbo.resv, false);
+		radeon_semaphore_sync_resv(rdev, ib.semaphore, pd->tbo.resv, false);
 		radeon_semaphore_sync_fence(ib.semaphore, vm->last_id_use);
 		WARN_ON(ib.length_dw > ndw);
 		r = radeon_ib_schedule(rdev, &ib, NULL, false);
@@ -825,7 +825,7 @@ static void radeon_vm_update_ptes(struct radeon_device *rdev,
 		unsigned nptes;
 		uint64_t pte;
 
-		radeon_semaphore_sync_resv(ib->semaphore, pt->tbo.resv, false);
+		radeon_semaphore_sync_resv(rdev, ib->semaphore, pt->tbo.resv, false);
 
 		if ((addr & ~mask) == (end & ~mask))
 			nptes = end - addr;
diff --git a/drivers/gpu/drm/radeon/rv770_dma.c b/drivers/gpu/drm/radeon/rv770_dma.c
index c112764adfdf..7f34bad2e724 100644
--- a/drivers/gpu/drm/radeon/rv770_dma.c
+++ b/drivers/gpu/drm/radeon/rv770_dma.c
@@ -67,7 +67,7 @@ struct radeon_fence *rv770_copy_dma(struct radeon_device *rdev,
 		return ERR_PTR(r);
 	}
 
-	radeon_semaphore_sync_resv(sem, resv, false);
+	radeon_semaphore_sync_resv(rdev, sem, resv, false);
 	radeon_semaphore_sync_rings(rdev, sem, ring->idx);
 
 	for (i = 0; i < num_loops; i++) {
diff --git a/drivers/gpu/drm/radeon/si_dma.c b/drivers/gpu/drm/radeon/si_dma.c
index 9b0dfbc913f3..b58f12b762d7 100644
--- a/drivers/gpu/drm/radeon/si_dma.c
+++ b/drivers/gpu/drm/radeon/si_dma.c
@@ -252,7 +252,7 @@ struct radeon_fence *si_copy_dma(struct radeon_device *rdev,
 		return ERR_PTR(r);
 	}
 
-	radeon_semaphore_sync_resv(sem, resv, false);
+	radeon_semaphore_sync_resv(rdev, sem, resv, false);
 	radeon_semaphore_sync_rings(rdev, sem, ring->idx);
 
 	for (i = 0; i < num_loops; i++) {