From patchwork Wed Mar 10 18:42:20 2010
X-Patchwork-Submitter: Pauli Nieminen <suokkos@gmail.com>
X-Patchwork-Id: 84669
From: Pauli Nieminen <suokkos@gmail.com>
To: dri-devel@lists.sourceforge.net
Subject: [PATCH 2/2] libdrm_radeon: Optimize cs_gem_reloc to do less looping.
Date: Wed, 10 Mar 2010 20:42:20 +0200
Message-Id: <1268246540-16212-2-git-send-email-suokkos@gmail.com>
X-Mailer: git-send-email 1.6.3.3
In-Reply-To: <1268246540-16212-1-git-send-email-suokkos@gmail.com>
References: <1268246540-16212-1-git-send-email-suokkos@gmail.com>
Cc: intel-gfx@lists.freedesktop.org

diff --git a/radeon/radeon_bo_gem.c b/radeon/radeon_bo_gem.c
index bc8058d..fb51f4a 100644
--- a/radeon/radeon_bo_gem.c
+++ b/radeon/radeon_bo_gem.c
@@ -80,6 +80,7 @@ static struct radeon_bo *bo_open(struct radeon_bo_manager *bom,
     bo->base.domains = domains;
     bo->base.flags = flags;
     bo->base.ptr = NULL;
+    atomic_set(&bo->base.referenced_in_cs, 0);
     bo->map_count = 0;
     if (handle) {
         struct drm_gem_open open_arg;
diff --git a/radeon/radeon_bo_int.h b/radeon/radeon_bo_int.h
index 9589ead..9c0ae68 100644
--- a/radeon/radeon_bo_int.h
+++ b/radeon/radeon_bo_int.h
@@ -1,6 +1,8 @@
 #ifndef RADEON_BO_INT
 #define RADEON_BO_INT
 
+#include <xf86atomic.h>
+
 struct radeon_bo_manager {
     struct radeon_bo_funcs *funcs;
     int fd;
@@ -17,7 +19,7 @@ struct radeon_bo_int {
     unsigned cref;
     struct radeon_bo_manager *bom;
     uint32_t space_accounted;
-    uint32_t referenced_in_cs;
+    atomic_t referenced_in_cs;
 };
 
 /* bo functions */
diff --git a/radeon/radeon_cs.c b/radeon/radeon_cs.c
index cc9be39..d0e922b 100644
--- a/radeon/radeon_cs.c
+++ b/radeon/radeon_cs.c
@@ -88,3 +88,9 @@ void radeon_cs_space_set_flush(struct radeon_cs *cs, void (*fn)(void *), void *data)
     csi->space_flush_fn = fn;
     csi->space_flush_data = data;
 }
+
+uint32_t radeon_cs_get_id(struct radeon_cs *cs)
+{
+    struct radeon_cs_int *csi = (struct radeon_cs_int *)cs;
+    return csi->id;
+}
diff --git a/radeon/radeon_cs.h b/radeon/radeon_cs.h
index 49d5d9a..7f6ee68 100644
--- a/radeon/radeon_cs.h
+++ b/radeon/radeon_cs.h
@@ -85,7 +85,7 @@ extern int radeon_cs_write_reloc(struct radeon_cs *cs,
                                  uint32_t read_domain,
                                  uint32_t write_domain,
                                  uint32_t flags);
-
+extern uint32_t radeon_cs_get_id(struct radeon_cs *cs);
 /*
  * add a persistent BO to the list
  * a persistent BO is one that will be referenced across flushes,
diff --git a/radeon/radeon_cs_gem.c b/radeon/radeon_cs_gem.c
index 45a219c..ef5d3d5 100644
--- a/radeon/radeon_cs_gem.c
+++ b/radeon/radeon_cs_gem.c
@@ -32,6 +32,7 @@
 #include <errno.h>
 #include <stdlib.h>
 #include <string.h>
+#include <pthread.h>
 #include <sys/mman.h>
 #include <sys/ioctl.h>
 #include "radeon_cs.h"
@@ -41,6 +42,7 @@
 #include "radeon_bo_gem.h"
 #include "drm.h"
 #include "xf86drm.h"
+#include "xf86atomic.h"
 #include "radeon_drm.h"
 
 struct radeon_cs_manager_gem {
@@ -68,6 +70,50 @@ struct cs_gem {
     struct radeon_bo_int    **relocs_bo;
 };
 
+static pthread_mutex_t id_mutex = PTHREAD_MUTEX_INITIALIZER;
+static uint32_t cs_id_source = 0;
+
+/**
+ * result is undefined if called with ~0
+ */
+static uint32_t get_first_zero(const uint32_t n)
+{
+    /* __builtin_ctz returns number of trailing zeros. */
+    return 1 << __builtin_ctz(~n);
+}
+
+/**
+ * Returns a free id for cs.
+ * If there is no free id, zero is returned.
+ **/
+static uint32_t generate_id(void)
+{
+    uint32_t r = 0;
+    pthread_mutex_lock( &id_mutex );
+    /* check for free ids */
+    if (cs_id_source != ~r) {
+        /* find first zero bit */
+        r = get_first_zero(cs_id_source);
+
+        /* set id as reserved */
+        cs_id_source |= r;
+    }
+    pthread_mutex_unlock( &id_mutex );
+    return r;
+}
+
+/**
+ * Free the id for later reuse
+ **/
+static void free_id(uint32_t id)
+{
+    pthread_mutex_lock( &id_mutex );
+
+    cs_id_source &= ~id;
+
+    pthread_mutex_unlock( &id_mutex );
+}
+
 static struct radeon_cs_int *cs_gem_create(struct radeon_cs_manager *csm,
                                            uint32_t ndw)
 {
@@ -90,6 +136,7 @@ static struct radeon_cs_int *cs_gem_create(struct radeon_cs_manager *csm,
     }
     csg->base.relocs_total_size = 0;
     csg->base.crelocs = 0;
+    csg->base.id = generate_id();
     csg->nrelocs = 4096 / (4 * 4) ;
     csg->relocs_bo = (struct radeon_bo_int**)calloc(1,
                                                     csg->nrelocs*sizeof(void*));
@@ -141,38 +188,45 @@ static int cs_gem_write_reloc(struct radeon_cs_int *cs,
     if (write_domain == RADEON_GEM_DOMAIN_CPU) {
         return -EINVAL;
     }
-    /* check if bo is already referenced */
-    for(i = 0; i < cs->crelocs; i++) {
-        idx = i * RELOC_SIZE;
-        reloc = (struct cs_reloc_gem*)&csg->relocs[idx];
-        if (reloc->handle == bo->handle) {
-            /* Check domains must be in read or write. As we check already
-             * checked that in argument one of the read or write domain was
-             * set we only need to check that if previous reloc as the read
-             * domain set then the read_domain should also be set for this
-             * new relocation.
-             */
-            /* the DDX expects to read and write from same pixmap */
-            if (write_domain && (reloc->read_domain & write_domain)) {
-                reloc->read_domain = 0;
-                reloc->write_domain = write_domain;
-            } else if (read_domain & reloc->write_domain) {
-                reloc->read_domain = 0;
-            } else {
-                if (write_domain != reloc->write_domain)
-                    return -EINVAL;
-                if (read_domain != reloc->read_domain)
-                    return -EINVAL;
+    /* use the bit field hash to determine
+       if this bo is definitely not in this cs. */
+    if ((atomic_read(&boi->referenced_in_cs) & cs->id)) {
+        /* check if bo is already referenced.
+         * Scanning from end to beginning reduces cycles with mesa because
+         * it often relocates the same shared dma bo again. */
+        for(i = cs->crelocs; i != 0;) {
+            --i;
+            idx = i * RELOC_SIZE;
+            reloc = (struct cs_reloc_gem*)&csg->relocs[idx];
+            if (reloc->handle == bo->handle) {
+                /* Check domains must be in read or write. As we already
+                 * checked that in the arguments one of the read or write
+                 * domains was set, we only need to check that if the previous
+                 * reloc has the read domain set then the read_domain should
+                 * also be set for this new relocation.
+                 */
+                /* the DDX expects to read and write from the same pixmap */
+                if (write_domain && (reloc->read_domain & write_domain)) {
+                    reloc->read_domain = 0;
+                    reloc->write_domain = write_domain;
+                } else if (read_domain & reloc->write_domain) {
+                    reloc->read_domain = 0;
+                } else {
+                    if (write_domain != reloc->write_domain)
+                        return -EINVAL;
+                    if (read_domain != reloc->read_domain)
+                        return -EINVAL;
+                }
+
+                reloc->read_domain |= read_domain;
+                reloc->write_domain |= write_domain;
+                /* update flags */
+                reloc->flags |= (flags & reloc->flags);
+                /* write relocation packet */
+                radeon_cs_write_dword((struct radeon_cs *)cs, 0xc0001000);
+                radeon_cs_write_dword((struct radeon_cs *)cs, idx);
+                return 0;
             }
-
-            reloc->read_domain |= read_domain;
-            reloc->write_domain |= write_domain;
-            /* update flags */
-            reloc->flags |= (flags & reloc->flags);
-            /* write relocation packet */
-            radeon_cs_write_dword((struct radeon_cs *)cs, 0xc0001000);
-            radeon_cs_write_dword((struct radeon_cs *)cs, idx);
-            return 0;
         }
     }
     /* new relocation */
@@ -203,6 +257,8 @@ static int cs_gem_write_reloc(struct radeon_cs_int *cs,
     reloc->flags = flags;
     csg->chunks[1].length_dw += RELOC_SIZE;
     radeon_bo_ref(bo);
+    /* bo might be referenced from another context so we have to use atomic operations */
+    atomic_add(&boi->referenced_in_cs, cs->id);
     cs->relocs_total_size += boi->size;
     radeon_cs_write_dword((struct radeon_cs *)cs, 0xc0001000);
     radeon_cs_write_dword((struct radeon_cs *)cs, idx);
@@ -288,6 +344,8 @@ static int cs_gem_emit(struct radeon_cs_int *cs)
                             &csg->cs, sizeof(struct drm_radeon_cs));
     for (i = 0; i < csg->base.crelocs; i++) {
         csg->relocs_bo[i]->space_accounted = 0;
+        /* bo might be referenced from another context so we have to use atomic operations */
+        atomic_dec(&csg->relocs_bo[i]->referenced_in_cs, cs->id);
         radeon_bo_unref((struct radeon_bo *)csg->relocs_bo[i]);
         csg->relocs_bo[i] = NULL;
     }
@@ -302,6 +360,7 @@ static int cs_gem_destroy(struct radeon_cs_int *cs)
 {
     struct cs_gem *csg = (struct cs_gem*)cs;
 
+    free_id(cs->id);
     free(csg->relocs_bo);
     free(cs->relocs);
     free(cs->packets);
@@ -317,6 +376,8 @@ static int cs_gem_erase(struct radeon_cs_int *cs)
     if (csg->relocs_bo) {
         for (i = 0; i < csg->base.crelocs; i++) {
             if (csg->relocs_bo[i]) {
+                /* bo might be referenced from another context so we have to use atomic operations */
+                atomic_dec(&csg->relocs_bo[i]->referenced_in_cs, cs->id);
                 radeon_bo_unref((struct radeon_bo *)csg->relocs_bo[i]);
                 csg->relocs_bo[i] = NULL;
             }
diff --git a/radeon/radeon_cs_int.h b/radeon/radeon_cs_int.h
index 8ba76bf..6cee574 100644
--- a/radeon/radeon_cs_int.h
+++ b/radeon/radeon_cs_int.h
@@ -28,6 +28,7 @@ struct radeon_cs_int {
     int bo_count;
     void (*space_flush_fn)(void *);
     void *space_flush_data;
+    uint32_t id;
 };
 
 /* cs functions */
diff --git a/xf86atomic.h b/xf86atomic.h
index de8e220..854187a 100644
--- a/xf86atomic.h
+++ b/xf86atomic.h
@@ -50,6 +50,8 @@ typedef struct {
 # define atomic_set(x, val) ((x)->atomic = (val))
 # define atomic_inc(x) ((void) __sync_fetch_and_add (&(x)->atomic, 1))
 # define atomic_dec_and_test(x) (__sync_fetch_and_add (&(x)->atomic, -1) == 1)
+# define atomic_add(x, v) ((void) __sync_add_and_fetch(&(x)->atomic, (v)))
+# define atomic_dec(x, v) ((void) __sync_sub_and_fetch(&(x)->atomic, (v)))
 # define atomic_cmpxchg(x, oldv, newv) __sync_val_compare_and_swap (&(x)->atomic, oldv, newv)
 
 #endif
@@ -66,6 +68,8 @@ typedef struct {
 # define atomic_read(x) AO_load_full(&(x)->atomic)
 # define atomic_set(x, val) AO_store_full(&(x)->atomic, (val))
 # define atomic_inc(x) ((void) AO_fetch_and_add1_full(&(x)->atomic))
+# define atomic_add(x, v) ((void) AO_fetch_and_add_full(&(x)->atomic, (v)))
+# define atomic_dec(x, v) ((void) AO_fetch_and_add_full(&(x)->atomic, -(v)))
 # define atomic_dec_and_test(x) (AO_fetch_and_sub1_full(&(x)->atomic) == 1)
 # define atomic_cmpxchg(x, oldv, newv) AO_compare_and_swap_full(&(x)->atomic, oldv, newv)
 
@@ -82,6 +86,8 @@ typedef struct { uint_t atomic; } atomic_t;
 # define atomic_set(x, val) ((x)->atomic = (uint_t)(val))
 # define atomic_inc(x) (atomic_inc_uint (&(x)->atomic))
 # define atomic_dec_and_test(x) (atomic_dec_uint_nv(&(x)->atomic) == 1)
+# define atomic_add(x, v) (atomic_add_int(&(x)->atomic, (v)))
+# define atomic_dec(x, v) (atomic_add_int(&(x)->atomic, -(v)))
 # define atomic_cmpxchg(x, oldv, newv) atomic_cas_uint (&(x)->atomic, oldv, newv)
 
 #endif
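
---

Editor's note, not part of the patch: the scheme above gives each command stream a distinct one-hot 32-bit id from generate_id() (zero once all 32 ids are taken), and each bo keeps a bitmask of the streams that currently reference it, so cs_gem_write_reloc can skip the duplicate-reloc scan whenever the bo's mask does not contain the cs id. The standalone sketch below illustrates that bookkeeping in single-threaded, non-atomic form; the demo_* names are hypothetical and are not libdrm API.

#include <stdint.h>
#include <stdio.h>

/* Pool of reserved cs ids; each id is a distinct power of two (one-hot). */
static uint32_t demo_id_pool = 0;

/* Return the lowest unused one-hot id, or 0 if all 32 ids are taken. */
static uint32_t demo_generate_id(void)
{
    if (demo_id_pool == ~0u)
        return 0;
    uint32_t id = 1u << __builtin_ctz(~demo_id_pool); /* first zero bit */
    demo_id_pool |= id;                               /* mark it reserved */
    return id;
}

/* Release an id back to the pool. */
static void demo_free_id(uint32_t id)
{
    demo_id_pool &= ~id;
}

/* A bo remembers, as a bitmask, which command streams reference it. */
struct demo_bo {
    uint32_t referenced_in_cs;
};

int main(void)
{
    uint32_t cs_a = demo_generate_id();   /* 0x1 */
    uint32_t cs_b = demo_generate_id();   /* 0x2 */
    struct demo_bo bo = { 0 };

    /* Writing a reloc for the bo in cs_a tags the bo with cs_a's bit. */
    bo.referenced_in_cs |= cs_a;

    /* Fast path: cs_b has never referenced this bo, so the duplicate-reloc
     * scan is skipped and the "new relocation" path is taken directly. */
    if (!(bo.referenced_in_cs & cs_b))
        printf("bo not in cs_b: skip the reloc scan\n");

    /* Only when the bit is set does the (reverse) reloc scan run. */
    if (bo.referenced_in_cs & cs_a)
        printf("bo may be in cs_a: scan relocs from the end\n");

    /* On emit/erase the bit is cleared; when the cs is destroyed its id is freed. */
    bo.referenced_in_cs &= ~cs_a;
    demo_free_id(cs_b);
    demo_free_id(cs_a);
    return 0;
}

In the patch itself the bo can be touched from other contexts, so the mask lives in an atomic_t and is updated with the new atomic_add()/atomic_dec() macros; because a bo gets at most one reloc entry per cs, adding the one-hot id once is equivalent to setting the bit and subtracting it once is equivalent to clearing it.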