From patchwork Thu May 17 17:23:56 2018
X-Patchwork-Submitter: Kieran Bingham
X-Patchwork-Id: 10407375
X-Patchwork-Delegate: geert@linux-m68k.org
From: Kieran Bingham
To: Laurent Pinchart
Cc: linux-media@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 Kieran Bingham, Laurent Pinchart
Subject: [PATCH v10 3/8] media: vsp1: Provide a body pool
Date: Thu, 17 May 2018 18:23:56 +0100
Message-Id: <77dc679d839ff569ed31caebf8e67be711385d1e.1526577622.git-series.kieran.bingham+renesas@ideasonboard.com>
X-Mailer: git-send-email 2.17.0

Each display list allocates a body to store register values in a dma
accessible buffer from a dma_alloc_wc() allocation. Each of these
results in an entry in the IOMMU TLB, and a large number of display
list allocations adds pressure to this resource.

Reduce TLB pressure on the IPMMUs by allocating multiple display list
bodies in a single allocation, and providing these to the display list
through a 'body pool'. A pool can be allocated by the display list
manager or entities which require their own body allocations.
Signed-off-by: Kieran Bingham
Reviewed-by: Laurent Pinchart
Signed-off-by: Laurent Pinchart
---
 drivers/media/platform/vsp1/vsp1_dl.c | 163 +++++++++++++++++++++++++++-
 drivers/media/platform/vsp1/vsp1_dl.h |   8 +-
 2 files changed, 171 insertions(+)

diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
index 51965c30dec2..41ace89a585b 100644
--- a/drivers/media/platform/vsp1/vsp1_dl.c
+++ b/drivers/media/platform/vsp1/vsp1_dl.c
@@ -41,6 +41,8 @@ struct vsp1_dl_entry {
 /**
  * struct vsp1_dl_body - Display list body
  * @list: entry in the display list list of bodies
+ * @free: entry in the pool free body list
+ * @pool: pool to which this body belongs
  * @vsp1: the VSP1 device
  * @entries: array of entries
  * @dma: DMA address of the entries
@@ -50,6 +52,9 @@ struct vsp1_dl_entry {
  */
 struct vsp1_dl_body {
 	struct list_head list;
+	struct list_head free;
+
+	struct vsp1_dl_body_pool *pool;
 	struct vsp1_device *vsp1;
 
 	struct vsp1_dl_entry *entries;
@@ -61,6 +66,30 @@ struct vsp1_dl_body {
 };
 
 /**
+ * struct vsp1_dl_body_pool - display list body pool
+ * @dma: DMA address of the entries
+ * @size: size of the full DMA memory pool in bytes
+ * @mem: CPU memory pointer for the pool
+ * @bodies: Array of DLB structures for the pool
+ * @free: List of free DLB entries
+ * @lock: Protects the free list
+ * @vsp1: the VSP1 device
+ */
+struct vsp1_dl_body_pool {
+	/* DMA allocation */
+	dma_addr_t dma;
+	size_t size;
+	void *mem;
+
+	/* Body management */
+	struct vsp1_dl_body *bodies;
+	struct list_head free;
+	spinlock_t lock;
+
+	struct vsp1_device *vsp1;
+};
+
+/**
  * struct vsp1_dl_list - Display list
  * @list: entry in the display list manager lists
  * @dlm: the display list manager
@@ -104,6 +133,7 @@ enum vsp1_dl_mode {
  * @active: list currently being processed (loaded) by hardware
  * @queued: list queued to the hardware (written to the DL registers)
  * @pending: list waiting to be queued to the hardware
+ * @pool: body pool for the display list bodies
  * @gc_work: bodies garbage collector work struct
  * @gc_bodies: array of display list bodies waiting to be freed
  */
@@ -119,6 +149,8 @@ struct vsp1_dl_manager {
 	struct vsp1_dl_list *queued;
 	struct vsp1_dl_list *pending;
 
+	struct vsp1_dl_body_pool *pool;
+
 	struct work_struct gc_work;
 	struct list_head gc_bodies;
 };
@@ -127,6 +159,137 @@ struct vsp1_dl_manager {
  * Display List Body Management
  */
 
+/**
+ * vsp1_dl_body_pool_create - Create a pool of bodies from a single allocation
+ * @vsp1: The VSP1 device
+ * @num_bodies: The number of bodies to allocate
+ * @num_entries: The maximum number of entries that a body can contain
+ * @extra_size: Extra allocation provided for the bodies
+ *
+ * Allocate a pool of display list bodies each with enough memory to contain
+ * the requested number of entries plus the @extra_size.
+ *
+ * Return a pointer to a pool on success or NULL if memory can't be allocated.
+ */
+struct vsp1_dl_body_pool *
+vsp1_dl_body_pool_create(struct vsp1_device *vsp1, unsigned int num_bodies,
+			 unsigned int num_entries, size_t extra_size)
+{
+	struct vsp1_dl_body_pool *pool;
+	size_t dlb_size;
+	unsigned int i;
+
+	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		return NULL;
+
+	pool->vsp1 = vsp1;
+
+	/*
+	 * TODO: 'extra_size' is only used by vsp1_dlm_create(), to allocate
+	 * extra memory for the display list header. We need only one header
+	 * per display list, not per display list body, thus this allocation
+	 * is extraneous and should be reworked in the future.
+	 */
+	dlb_size = num_entries * sizeof(struct vsp1_dl_entry) + extra_size;
+	pool->size = dlb_size * num_bodies;
+
+	pool->bodies = kcalloc(num_bodies, sizeof(*pool->bodies), GFP_KERNEL);
+	if (!pool->bodies) {
+		kfree(pool);
+		return NULL;
+	}
+
+	pool->mem = dma_alloc_wc(vsp1->bus_master, pool->size, &pool->dma,
+				 GFP_KERNEL);
+	if (!pool->mem) {
+		kfree(pool->bodies);
+		kfree(pool);
+		return NULL;
+	}
+
+	spin_lock_init(&pool->lock);
+	INIT_LIST_HEAD(&pool->free);
+
+	for (i = 0; i < num_bodies; ++i) {
+		struct vsp1_dl_body *dlb = &pool->bodies[i];
+
+		dlb->pool = pool;
+		dlb->max_entries = num_entries;
+
+		dlb->dma = pool->dma + i * dlb_size;
+		dlb->entries = pool->mem + i * dlb_size;
+
+		list_add_tail(&dlb->free, &pool->free);
+	}
+
+	return pool;
+}
+
+/**
+ * vsp1_dl_body_pool_destroy - Release a body pool
+ * @pool: The body pool
+ *
+ * Release all components of a pool allocation.
+ */
+void vsp1_dl_body_pool_destroy(struct vsp1_dl_body_pool *pool)
+{
+	if (!pool)
+		return;
+
+	if (pool->mem)
+		dma_free_wc(pool->vsp1->bus_master, pool->size, pool->mem,
+			    pool->dma);
+
+	kfree(pool->bodies);
+	kfree(pool);
+}
+
+/**
+ * vsp1_dl_body_get - Obtain a body from a pool
+ * @pool: The body pool
+ *
+ * Obtain a body from the pool without blocking.
+ *
+ * Returns a display list body or NULL if there are none available.
+ */
+struct vsp1_dl_body *vsp1_dl_body_get(struct vsp1_dl_body_pool *pool)
+{
+	struct vsp1_dl_body *dlb = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	if (!list_empty(&pool->free)) {
+		dlb = list_first_entry(&pool->free, struct vsp1_dl_body, free);
+		list_del(&dlb->free);
+	}
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+
+	return dlb;
+}
+
+/**
+ * vsp1_dl_body_put - Return a body back to its pool
+ * @dlb: The display list body
+ *
+ * Return a body back to the pool, and reset the num_entries to clear the list.
+ */
+void vsp1_dl_body_put(struct vsp1_dl_body *dlb)
+{
+	unsigned long flags;
+
+	if (!dlb)
+		return;
+
+	dlb->num_entries = 0;
+
+	spin_lock_irqsave(&dlb->pool->lock, flags);
+	list_add_tail(&dlb->free, &dlb->pool->free);
+	spin_unlock_irqrestore(&dlb->pool->lock, flags);
+}
+
 /*
  * Initialize a display list body object and allocate DMA memory for the body
  * data. The display list body object is expected to have been initialized to
diff --git a/drivers/media/platform/vsp1/vsp1_dl.h b/drivers/media/platform/vsp1/vsp1_dl.h
index 57565debe132..107eebcdbab6 100644
--- a/drivers/media/platform/vsp1/vsp1_dl.h
+++ b/drivers/media/platform/vsp1/vsp1_dl.h
@@ -13,6 +13,7 @@
 struct vsp1_device;
 struct vsp1_dl_body;
+struct vsp1_dl_body_pool;
 struct vsp1_dl_list;
 struct vsp1_dl_manager;
 
@@ -33,6 +34,13 @@ void vsp1_dl_list_put(struct vsp1_dl_list *dl);
 void vsp1_dl_list_write(struct vsp1_dl_list *dl, u32 reg, u32 data);
 void vsp1_dl_list_commit(struct vsp1_dl_list *dl, bool internal);
 
+struct vsp1_dl_body_pool *
+vsp1_dl_body_pool_create(struct vsp1_device *vsp1, unsigned int num_bodies,
+			 unsigned int num_entries, size_t extra_size);
+void vsp1_dl_body_pool_destroy(struct vsp1_dl_body_pool *pool);
+struct vsp1_dl_body *vsp1_dl_body_get(struct vsp1_dl_body_pool *pool);
+void vsp1_dl_body_put(struct vsp1_dl_body *dlb);
+
 struct vsp1_dl_body *vsp1_dl_body_alloc(struct vsp1_device *vsp1,
 					unsigned int num_entries);
 void vsp1_dl_body_free(struct vsp1_dl_body *dlb);
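
[Not part of the patch: the sketch below only illustrates how a caller
might use the four pool functions declared above in vsp1_dl.h. The
example_* names, the pool sizes, and the init/run/cleanup structure are
hypothetical; filling the body with register writes is elided because
the body write helpers are introduced separately in this series.]

/* Usage sketch, assuming the declarations added to vsp1_dl.h above. */
#include <linux/errno.h>

#include "vsp1.h"
#include "vsp1_dl.h"

#define EXAMPLE_NUM_BODIES	3	/* hypothetical: bodies kept in flight */
#define EXAMPLE_NUM_ENTRIES	32	/* hypothetical: register writes per body */

static int example_entity_init(struct vsp1_device *vsp1,
			       struct vsp1_dl_body_pool **pool)
{
	/* One DMA allocation backs all bodies, keeping IOMMU TLB usage low. */
	*pool = vsp1_dl_body_pool_create(vsp1, EXAMPLE_NUM_BODIES,
					 EXAMPLE_NUM_ENTRIES, 0);
	if (!*pool)
		return -ENOMEM;

	return 0;
}

static void example_entity_run(struct vsp1_dl_body_pool *pool)
{
	struct vsp1_dl_body *dlb;

	/* Non-blocking: NULL means every body in the pool is in use. */
	dlb = vsp1_dl_body_get(pool);
	if (!dlb)
		return;

	/* ... fill the body and attach it to a display list here ... */

	/* Return the body; vsp1_dl_body_put() resets num_entries for reuse. */
	vsp1_dl_body_put(dlb);
}

static void example_entity_cleanup(struct vsp1_dl_body_pool *pool)
{
	/* Releases the bodies array and the single DMA allocation. */
	vsp1_dl_body_pool_destroy(pool);
}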