From patchwork Thu Aug 13 05:43:07 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 7006201
From: Thomas Hellstrom
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 18/28] drm/vmwgfx: Avoid cmdbuf alloc sleeping if !TASK_RUNNING
Date: Wed, 12 Aug 2015 22:43:07 -0700
Message-Id: <1439444597-4664-19-git-send-email-thellstrom@vmware.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1439444597-4664-1-git-send-email-thellstrom@vmware.com>
References: <1439444597-4664-1-git-send-email-thellstrom@vmware.com>

If the command buffer pool is out of space, the code waits until space is
available. However, since the condition code tries to allocate a
range-manager node while !TASK_RUNNING, we get a kernel warning. Avoid this
by pre-allocating the mm node. This will also probably be more efficient.
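
For readers unfamiliar with the warning: wait_event() sets the task state to
TASK_UNINTERRUPTIBLE before each re-evaluation of its wait condition, so a
condition that performs a GFP_KERNEL allocation may sleep while !TASK_RUNNING
and trips the might_sleep() check ("do not call blocking ops when
!TASK_RUNNING"). The following is a minimal sketch of the pre-allocation
pattern adopted here, not the driver's actual code: the demo_* names are
invented for illustration, while the drm_mm calls mirror the ones used in
the patch.

/*
 * Illustrative sketch only -- a simplified stand-in for the driver code,
 * with made-up demo_* names.
 */
#include <drm/drm_mm.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/wait.h>

struct demo_pool {
	struct drm_mm mm;		/* range manager for pool space */
	spinlock_t lock;		/* protects @mm */
	wait_queue_head_t alloc_queue;	/* woken when space is freed */
};

/*
 * Wait condition: called by wait_event() after the task state has been set
 * to TASK_UNINTERRUPTIBLE, so it must not sleep.  The old code kzalloc()ed
 * @node right here with GFP_KERNEL, which may sleep and thus triggered the
 * warning.  With the node pre-allocated by the caller, only the
 * non-sleeping insert under a spinlock remains.
 */
static bool demo_try_alloc(struct demo_pool *pool, struct drm_mm_node *node,
			   size_t pages)
{
	int ret;

	memset(node, 0, sizeof(*node));
	spin_lock_bh(&pool->lock);
	ret = drm_mm_insert_node_generic(&pool->mm, node, pages, 0, 0,
					 DRM_MM_SEARCH_DEFAULT,
					 DRM_MM_CREATE_DEFAULT);
	spin_unlock_bh(&pool->lock);

	return ret == 0;
}

static void demo_alloc_space(struct demo_pool *pool, struct drm_mm_node *node,
			     size_t pages)
{
	/* @node's storage already exists, so sleeping here is safe. */
	wait_event(pool->alloc_queue, demo_try_alloc(pool, node, pages));
}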
Signed-off-by: Thomas Hellstrom
Reviewed-by: Sinclair Yeh
---
 drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c | 83 ++++++++++++++--------------------
 1 file changed, 34 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
index b044bf5..e94feb3 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
@@ -33,7 +33,8 @@
  * multiple of the DMA pool allocation size.
  */
 #define VMW_CMDBUF_INLINE_ALIGN	64
-#define VMW_CMDBUF_INLINE_SIZE (1024 - VMW_CMDBUF_INLINE_ALIGN)
+#define VMW_CMDBUF_INLINE_SIZE \
+	(1024 - ALIGN(sizeof(SVGACBHeader), VMW_CMDBUF_INLINE_ALIGN))
 
 /**
  * struct vmw_cmdbuf_context - Command buffer context queues
@@ -145,7 +146,7 @@ struct vmw_cmdbuf_header {
 	SVGACBHeader *cb_header;
 	SVGACBContext cb_context;
 	struct list_head list;
-	struct drm_mm_node *node;
+	struct drm_mm_node node;
 	dma_addr_t handle;
 	u8 *cmd;
 	size_t size;
@@ -169,13 +170,13 @@ struct vmw_cmdbuf_dheader {
  * struct vmw_cmdbuf_alloc_info - Command buffer space allocation metadata
  *
  * @page_size: Size of requested command buffer space in pages.
- * @node: The range manager node if allocation succeeded.
- * @ret: Error code if failure. Otherwise 0.
+ * @node: Pointer to the range manager node.
+ * @done: True if this allocation has succeeded.
  */
 struct vmw_cmdbuf_alloc_info {
 	size_t page_size;
 	struct drm_mm_node *node;
-	int ret;
+	bool done;
 };
 
 /* Loop over each context in the command buffer manager. */
@@ -253,9 +254,7 @@ static void __vmw_cmdbuf_header_free(struct vmw_cmdbuf_header *header)
 		return;
 	}
 
-	drm_mm_remove_node(header->node);
-	kfree(header->node);
-	header->node = NULL;
+	drm_mm_remove_node(&header->node);
 	wake_up_all(&man->alloc_queue);
 	if (header->cb_header)
 		dma_pool_free(man->headers, header->cb_header,
@@ -669,32 +668,26 @@ static bool vmw_cmdbuf_try_alloc(struct vmw_cmdbuf_man *man,
 {
 	int ret;
 
-	if (info->node)
+	if (info->done)
 		return true;
-
-	info->node = kzalloc(sizeof(*info->node), GFP_KERNEL);
-	if (!info->node) {
-		info->ret = -ENOMEM;
-		return true;
-	}
-
+
+	memset(info->node, 0, sizeof(*info->node));
 	spin_lock_bh(&man->lock);
-	ret = drm_mm_insert_node_generic(&man->mm, info->node, info->page_size, 0, 0,
+	ret = drm_mm_insert_node_generic(&man->mm, info->node, info->page_size,
+					 0, 0,
 					 DRM_MM_SEARCH_DEFAULT,
 					 DRM_MM_CREATE_DEFAULT);
 	spin_unlock_bh(&man->lock);
-	if (ret) {
-		kfree(info->node);
-		info->node = NULL;
-	}
+	info->done = !ret;
 
-	return !!info->node;
+	return info->done;
 }
 
 /**
  * vmw_cmdbuf_alloc_space - Allocate buffer space from the main pool.
  *
  * @man: The command buffer manager.
+ * @node: Pointer to pre-allocated range-manager node.
  * @size: The size of the allocation.
  * @interruptible: Whether to sleep interruptible while waiting for space.
  *
@@ -702,15 +695,16 @@ static bool vmw_cmdbuf_try_alloc(struct vmw_cmdbuf_man *man,
  * no space available ATM, it turns on IRQ handling and sleeps waiting for it to
  * become available.
  */
-static struct drm_mm_node *vmw_cmdbuf_alloc_space(struct vmw_cmdbuf_man *man,
-						  size_t size,
-						  bool interruptible)
+int vmw_cmdbuf_alloc_space(struct vmw_cmdbuf_man *man,
+			   struct drm_mm_node *node,
+			   size_t size,
+			   bool interruptible)
 {
 	struct vmw_cmdbuf_alloc_info info;
 
 	info.page_size = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	info.node = NULL;
-	info.ret = 0;
+	info.node = node;
+	info.done = false;
 
 	/*
 	 * To prevent starvation of large requests, only one allocating call
@@ -718,22 +712,14 @@ static struct drm_mm_node *vmw_cmdbuf_alloc_space(struct vmw_cmdbuf_man *man,
 	 */
 	if (interruptible) {
 		if (mutex_lock_interruptible(&man->space_mutex))
-			return ERR_PTR(-ERESTARTSYS);
+			return -ERESTARTSYS;
 	} else {
 		mutex_lock(&man->space_mutex);
 	}
 
 	/* Try to allocate space without waiting. */
-	(void) vmw_cmdbuf_try_alloc(man, &info);
-	if (info.ret && !info.node) {
-		mutex_unlock(&man->space_mutex);
-		return ERR_PTR(info.ret);
-	}
-
-	if (info.node) {
-		mutex_unlock(&man->space_mutex);
-		return info.node;
-	}
+	if (vmw_cmdbuf_try_alloc(man, &info))
+		goto out_unlock;
 
 	vmw_generic_waiter_add(man->dev_priv,
 			       SVGA_IRQFLAG_COMMAND_BUFFER,
@@ -749,7 +735,7 @@ static struct drm_mm_node *vmw_cmdbuf_alloc_space(struct vmw_cmdbuf_man *man,
 				(man->dev_priv, SVGA_IRQFLAG_COMMAND_BUFFER,
 				 &man->dev_priv->cmdbuf_waiters);
 			mutex_unlock(&man->space_mutex);
-			return ERR_PTR(ret);
+			return ret;
 		}
 	} else {
 		wait_event(man->alloc_queue, vmw_cmdbuf_try_alloc(man, &info));
@@ -757,11 +743,11 @@ static struct drm_mm_node *vmw_cmdbuf_alloc_space(struct vmw_cmdbuf_man *man,
 	vmw_generic_waiter_remove(man->dev_priv,
 				  SVGA_IRQFLAG_COMMAND_BUFFER,
 				  &man->dev_priv->cmdbuf_waiters);
+
+out_unlock:
 	mutex_unlock(&man->space_mutex);
-	if (info.ret && !info.node)
-		return ERR_PTR(info.ret);
 
-	return info.node;
+	return 0;
 }
 
 /**
@@ -785,10 +771,10 @@ static int vmw_cmdbuf_space_pool(struct vmw_cmdbuf_man *man,
 	if (!man->has_pool)
 		return -ENOMEM;
 
-	header->node = vmw_cmdbuf_alloc_space(man, size, interruptible);
+	ret = vmw_cmdbuf_alloc_space(man, &header->node, size, interruptible);
 
-	if (IS_ERR(header->node))
-		return PTR_ERR(header->node);
+	if (ret)
+		return ret;
 
 	header->cb_header = dma_pool_alloc(man->headers, GFP_KERNEL,
 					   &header->handle);
@@ -797,9 +783,9 @@ static int vmw_cmdbuf_space_pool(struct vmw_cmdbuf_man *man,
 		goto out_no_cb_header;
 	}
 
-	header->size = header->node->size << PAGE_SHIFT;
+	header->size = header->node.size << PAGE_SHIFT;
 	cb_hdr = header->cb_header;
-	offset = header->node->start << PAGE_SHIFT;
+	offset = header->node.start << PAGE_SHIFT;
 	header->cmd = man->map + offset;
 	memset(cb_hdr, 0, sizeof(*cb_hdr));
 	if (man->using_mob) {
@@ -814,9 +800,8 @@ static int vmw_cmdbuf_space_pool(struct vmw_cmdbuf_man *man,
 
 out_no_cb_header:
 	spin_lock_bh(&man->lock);
-	drm_mm_remove_node(header->node);
+	drm_mm_remove_node(&header->node);
 	spin_unlock_bh(&man->lock);
-	kfree(header->node);
 
 	return ret;
 }
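
Continuing the sketch from the commit message, the caller side after this
patch reduces to the following (again with hypothetical demo_* names; the
real counterparts in the diff are vmw_cmdbuf_space_pool() and
__vmw_cmdbuf_header_free()). Embedding the node in the header also drops one
kzalloc()/kfree() pair per command buffer, which is where the expected
efficiency gain comes from.

/* Continuation of the demo_* sketch above; still illustrative only. */
#include <asm/page.h>		/* PAGE_SHIFT */

struct demo_header {
	struct drm_mm_node node;	/* embedded; was a kzalloc()ed pointer */
	size_t size;			/* allocation size in bytes */
};

static void demo_space_pool(struct demo_pool *pool, struct demo_header *header,
			    size_t pages)
{
	/* &header->node exists up front; no allocation while waiting. */
	demo_alloc_space(pool, &header->node, pages);
	header->size = header->node.size << PAGE_SHIFT;
}

static void demo_header_free(struct demo_pool *pool, struct demo_header *header)
{
	spin_lock_bh(&pool->lock);
	drm_mm_remove_node(&header->node);	/* no kfree() any more */
	spin_unlock_bh(&pool->lock);
	wake_up_all(&pool->alloc_queue);	/* let waiters retry */
}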