From patchwork Tue Jun 18 06:24:39 2019
X-Patchwork-Submitter: Thomas Hellström (VMware)
X-Patchwork-Id: 11000939
From: Thomas Hellström (VMware)
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 1/4] drm/vmwgfx: Assign eviction priorities to resources
Date: Tue, 18 Jun 2019 08:24:39 +0200
Message-Id: <20190618062442.14647-1-thomas@shipmail.org>
X-Mailer: git-send-email 2.20.1
Cc: pv-drivers@vmware.com, Thomas Hellstrom, Deepak Rawat, linux-graphics-maintainer@vmware.com

From: Thomas Hellstrom

TTM provides a means to assign eviction priorities to buffer objects. This
means that all buffer objects with a lower priority will be evicted first
under memory pressure.

Use this to make sure surfaces, and in particular non-dirty surfaces, are
evicted first. Evicting shaders, cotables and contexts in particular implies
a significant performance hit on vmwgfx, so make sure these resources are
evicted last.

Some buffer objects are sub-allocated in user-space, which means we can have
many resources attached to a single buffer object or resource. In that case
the buffer object is given the highest priority of the attached resources.

Signed-off-by: Thomas Hellstrom
Reviewed-by: Deepak Rawat
Reviewed-by: Emil Velikov
Acked-by: Emil Velikov
---
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +
 drivers/gpu/drm/vmwgfx/vmwgfx_context.c       |  4 ++
 drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c       | 13 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.h           | 72 +++++++++++++++++++
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c      | 56 +++++++++++----
 drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h |  2 +
 drivers/gpu/drm/vmwgfx/vmwgfx_shader.c        |  8 ++-
 drivers/gpu/drm/vmwgfx/vmwgfx_surface.c       |  4 ++
 8 files changed, 141 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
index 5d5c2bce01f3..c0829d50eecc 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
@@ -510,6 +510,8 @@ int vmw_bo_init(struct vmw_private *dev_priv,
         acc_size = vmw_bo_acc_size(dev_priv, size, user);
         memset(vmw_bo, 0, sizeof(*vmw_bo));
 
+        BUILD_BUG_ON(TTM_MAX_BO_PRIORITY <= 3);
+        vmw_bo->base.priority = 3;
 
         INIT_LIST_HEAD(&vmw_bo->res_list);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_context.c b/drivers/gpu/drm/vmwgfx/vmwgfx_context.c
index 63f111068a44..a56c9d802382 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_context.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_context.c
@@ -88,6 +88,8 @@ static const struct vmw_res_func vmw_gb_context_func = {
         .res_type = vmw_res_context,
         .needs_backup = true,
         .may_evict = true,
+        .prio = 3,
+        .dirty_prio = 3,
         .type_name = "guest backed contexts",
         .backup_placement = &vmw_mob_placement,
         .create = vmw_gb_context_create,
@@ -100,6 +102,8 @@ static const struct vmw_res_func vmw_dx_context_func = {
         .res_type = vmw_res_dx_context,
         .needs_backup = true,
         .may_evict = true,
+        .prio = 3,
+        .dirty_prio = 3,
         .type_name = "dx contexts",
         .backup_placement = &vmw_mob_placement,
         .create = vmw_dx_context_create,
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
index b4f6e1217c9d..8c699cb2565b 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
@@ -116,6 +116,8 @@ static const struct vmw_res_func vmw_cotable_func = {
         .res_type = vmw_res_cotable,
         .needs_backup = true,
         .may_evict = true,
+        .prio = 3,
+        .dirty_prio = 3,
         .type_name = "context guest backed object tables",
         .backup_placement = &vmw_mob_placement,
         .create = vmw_cotable_create,
@@ -307,7 +309,7 @@ static int vmw_cotable_unbind(struct vmw_resource *res,
         struct ttm_buffer_object *bo = val_buf->bo;
         struct vmw_fence_obj *fence;
 
-        if (list_empty(&res->mob_head))
+        if (!vmw_resource_mob_attached(res))
                 return 0;
 
         WARN_ON_ONCE(bo->mem.mem_type != VMW_PL_MOB);
@@ -453,6 +455,7 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size)
                 goto out_wait;
         }
 
+        vmw_resource_mob_detach(res);
         res->backup = buf;
         res->backup_size = new_size;
         vcotbl->size_read_back = cur_size_read_back;
@@ -467,12 +470,12 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size)
                 res->backup = old_buf;
                 res->backup_size = old_size;
                 vcotbl->size_read_back = old_size_read_back;
+                vmw_resource_mob_attach(res);
                 goto out_wait;
         }
 
+        vmw_resource_mob_attach(res);
         /* Let go of the old mob. */
-        list_del(&res->mob_head);
-        list_add_tail(&res->mob_head, &buf->res_list);
         vmw_bo_unreference(&old_buf);
 
         res->id = vcotbl->type;
@@ -496,7 +499,7 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size)
  * is called before bind() in the validation sequence is instead used for two
  * things.
  * 1) Unscrub the cotable if it is scrubbed and still attached to a backup
- * buffer, that is, if @res->mob_head is non-empty.
+ * buffer.
  * 2) Resize the cotable if needed.
  */
 static int vmw_cotable_create(struct vmw_resource *res)
@@ -512,7 +515,7 @@ static int vmw_cotable_create(struct vmw_resource *res)
                 new_size *= 2;
 
         if (likely(new_size <= res->backup_size)) {
-                if (vcotbl->scrubbed && !list_empty(&res->mob_head)) {
+                if (vcotbl->scrubbed && vmw_resource_mob_attached(res)) {
                         ret = vmw_cotable_unscrub(res);
                         if (ret)
                                 return ret;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
index 366dcfc1f9bb..7c935c72d368 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
@@ -86,6 +86,15 @@ struct vmw_fpriv {
         bool gb_aware; /* user-space is guest-backed aware */
 };
 
+/**
+ * struct vmw_buffer_object - TTM buffer object with vmwgfx additions
+ * @base: The TTM buffer object
+ * @res_list: List of resources using this buffer object as a backing MOB
+ * @pin_count: pin depth
+ * @dx_query_ctx: DX context if this buffer object is used as a DX query MOB
+ * @map: Kmap object for semi-persistent mappings
+ * @res_prios: Eviction priority counts for attached resources
+ */
 struct vmw_buffer_object {
         struct ttm_buffer_object base;
         struct list_head res_list;
@@ -94,6 +103,7 @@ struct vmw_buffer_object {
         struct vmw_resource *dx_query_ctx;
         /* Protected by reservation */
         struct ttm_bo_kmap_obj map;
+        u32 res_prios[TTM_MAX_BO_PRIORITY];
 };
 
 /**
@@ -145,6 +155,7 @@ struct vmw_resource {
         struct kref kref;
         struct vmw_private *dev_priv;
         int id;
+        u32 used_prio;
         unsigned long backup_size;
         bool res_dirty;
         bool backup_dirty;
@@ -709,6 +720,19 @@ extern void vmw_query_move_notify(struct ttm_buffer_object *bo,
 extern int vmw_query_readback_all(struct vmw_buffer_object *dx_query_mob);
 extern void vmw_resource_evict_all(struct vmw_private *dev_priv);
 extern void vmw_resource_unbind_list(struct vmw_buffer_object *vbo);
+void vmw_resource_mob_attach(struct vmw_resource *res);
+void vmw_resource_mob_detach(struct vmw_resource *res);
+
+/**
+ * vmw_resource_mob_attached - Whether a resource currently has a mob attached
+ * @res: The resource
+ *
+ * Return: true if the resource has a mob attached, false otherwise.
+ */
+static inline bool vmw_resource_mob_attached(const struct vmw_resource *res)
+{
+        return !list_empty(&res->mob_head);
+}
 
 /**
  * vmw_user_resource_noref_release - release a user resource pointer looked up
@@ -787,6 +811,54 @@ static inline void vmw_user_bo_noref_release(void)
         ttm_base_object_noref_release();
 }
 
+/**
+ * vmw_bo_prio_adjust - Adjust the buffer object eviction priority
+ * according to attached resources
+ * @vbo: The struct vmw_buffer_object
+ */
+static inline void vmw_bo_prio_adjust(struct vmw_buffer_object *vbo)
+{
+        int i = ARRAY_SIZE(vbo->res_prios);
+
+        while (i--) {
+                if (vbo->res_prios[i]) {
+                        vbo->base.priority = i;
+                        return;
+                }
+        }
+
+        vbo->base.priority = 3;
+}
+
+/**
+ * vmw_bo_prio_add - Notify a buffer object of a newly attached resource
+ * eviction priority
+ * @vbo: The struct vmw_buffer_object
+ * @prio: The resource priority
+ *
+ * After being notified, the code assigns the highest resource eviction priority
+ * to the backing buffer object (mob).
+ */
+static inline void vmw_bo_prio_add(struct vmw_buffer_object *vbo, int prio)
+{
+        if (vbo->res_prios[prio]++ == 0)
+                vmw_bo_prio_adjust(vbo);
+}
+
+/**
+ * vmw_bo_prio_del - Notify a buffer object of a resource with a certain
+ * priority being removed
+ * @vbo: The struct vmw_buffer_object
+ * @prio: The resource priority
+ *
+ * After being notified, the code assigns the highest resource eviction priority
+ * to the backing buffer object (mob).
+ */
+static inline void vmw_bo_prio_del(struct vmw_buffer_object *vbo, int prio)
+{
+        if (--vbo->res_prios[prio] == 0)
+                vmw_bo_prio_adjust(vbo);
+}
 
 /**
  * Misc Ioctl functionality - vmwgfx_ioctl.c
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
index 1d38a8b2f2ec..be7d4149a129 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
@@ -34,6 +34,37 @@
 
 #define VMW_RES_EVICT_ERR_COUNT 10
 
+/**
+ * vmw_resource_mob_attach - Mark a resource as attached to its backing mob
+ * @res: The resource
+ */
+void vmw_resource_mob_attach(struct vmw_resource *res)
+{
+        struct vmw_buffer_object *backup = res->backup;
+
+        lockdep_assert_held(&backup->base.resv->lock.base);
+        res->used_prio = (res->res_dirty) ? res->func->dirty_prio :
+                res->func->prio;
+        list_add_tail(&res->mob_head, &backup->res_list);
+        vmw_bo_prio_add(backup, res->used_prio);
+}
+
+/**
+ * vmw_resource_mob_detach - Mark a resource as detached from its backing mob
+ * @res: The resource
+ */
+void vmw_resource_mob_detach(struct vmw_resource *res)
+{
+        struct vmw_buffer_object *backup = res->backup;
+
+        lockdep_assert_held(&backup->base.resv->lock.base);
+        if (vmw_resource_mob_attached(res)) {
+                list_del_init(&res->mob_head);
+                vmw_bo_prio_del(backup, res->used_prio);
+        }
+}
+
+
 struct vmw_resource *vmw_resource_reference(struct vmw_resource *res)
 {
         kref_get(&res->kref);
@@ -80,7 +111,7 @@ static void vmw_resource_release(struct kref *kref)
                 struct ttm_buffer_object *bo = &res->backup->base;
 
                 ttm_bo_reserve(bo, false, false, NULL);
-                if (!list_empty(&res->mob_head) &&
+                if (vmw_resource_mob_attached(res) &&
                     res->func->unbind != NULL) {
                         struct ttm_validate_buffer val_buf;
 
@@ -89,7 +120,7 @@ static void vmw_resource_release(struct kref *kref)
                         res->func->unbind(res, false, &val_buf);
                 }
                 res->backup_dirty = false;
-                list_del_init(&res->mob_head);
+                vmw_resource_mob_detach(res);
                 ttm_bo_unreserve(bo);
                 vmw_bo_unreference(&res->backup);
         }
@@ -179,6 +210,7 @@ int vmw_resource_init(struct vmw_private *dev_priv, struct vmw_resource *res,
         res->backup_offset = 0;
         res->backup_dirty = false;
         res->res_dirty = false;
+        res->used_prio = 3;
         if (delay_id)
                 return 0;
         else
@@ -355,14 +387,14 @@ static int vmw_resource_do_validate(struct vmw_resource *res,
         }
 
         if (func->bind &&
-            ((func->needs_backup && list_empty(&res->mob_head) &&
+            ((func->needs_backup && !vmw_resource_mob_attached(res) &&
               val_buf->bo != NULL) ||
              (!func->needs_backup && val_buf->bo != NULL))) {
                 ret = func->bind(res, val_buf);
                 if (unlikely(ret != 0))
                         goto out_bind_failed;
                 if (func->needs_backup)
-                        list_add_tail(&res->mob_head, &res->backup->res_list);
+                        vmw_resource_mob_attach(res);
         }
 
         return 0;
@@ -402,15 +434,13 @@ void vmw_resource_unreserve(struct vmw_resource *res,
 
         if (switch_backup && new_backup != res->backup) {
                 if (res->backup) {
-                        lockdep_assert_held(&res->backup->base.resv->lock.base);
-                        list_del_init(&res->mob_head);
+                        vmw_resource_mob_detach(res);
                         vmw_bo_unreference(&res->backup);
                 }
 
                 if (new_backup) {
                         res->backup = vmw_bo_reference(new_backup);
-                        lockdep_assert_held(&new_backup->base.resv->lock.base);
-                        list_add_tail(&res->mob_head, &new_backup->res_list);
+                        vmw_resource_mob_attach(res);
                 } else {
                         res->backup = NULL;
                 }
@@ -469,7 +499,7 @@ vmw_resource_check_buffer(struct ww_acquire_ctx *ticket,
         if (unlikely(ret != 0))
                 goto out_no_reserve;
 
-        if (res->func->needs_backup && list_empty(&res->mob_head))
+        if (res->func->needs_backup && !vmw_resource_mob_attached(res))
                 return 0;
 
         backup_dirty = res->backup_dirty;
@@ -574,11 +604,11 @@ static int vmw_resource_do_evict(struct ww_acquire_ctx *ticket,
                 return ret;
 
         if (unlikely(func->unbind != NULL &&
-                     (!func->needs_backup || !list_empty(&res->mob_head)))) {
+                     (!func->needs_backup || vmw_resource_mob_attached(res)))) {
                 ret = func->unbind(res, res->res_dirty, &val_buf);
                 if (unlikely(ret != 0))
                         goto out_no_unbind;
-                list_del_init(&res->mob_head);
+                vmw_resource_mob_detach(res);
         }
         ret = func->destroy(res);
         res->backup_dirty = true;
@@ -660,7 +690,7 @@ int vmw_resource_validate(struct vmw_resource *res, bool intr)
         if (unlikely(ret != 0))
                 goto out_no_validate;
         else if (!res->func->needs_backup && res->backup) {
-                list_del_init(&res->mob_head);
+                WARN_ON_ONCE(vmw_resource_mob_attached(res));
                 vmw_bo_unreference(&res->backup);
         }
 
@@ -699,7 +729,7 @@ void vmw_resource_unbind_list(struct vmw_buffer_object *vbo)
                 (void) res->func->unbind(res, res->res_dirty, &val_buf);
                 res->backup_dirty = true;
                 res->res_dirty = false;
-                list_del_init(&res->mob_head);
+                vmw_resource_mob_detach(res);
         }
 
         (void) ttm_bo_wait(&vbo->base, false, false);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h
index 7e19eba0b0b8..984e588c62ca 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h
@@ -78,6 +78,8 @@ struct vmw_res_func {
         const char *type_name;
         struct ttm_placement *backup_placement;
         bool may_evict;
+        u32 prio;
+        u32 dirty_prio;
 
         int (*create) (struct vmw_resource *res);
         int (*destroy) (struct vmw_resource *res);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
index d310d21f0d54..e139fdfd1635 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
@@ -95,6 +95,8 @@ static const struct vmw_res_func vmw_gb_shader_func = {
         .res_type = vmw_res_shader,
         .needs_backup = true,
         .may_evict = true,
+        .prio = 3,
+        .dirty_prio = 3,
         .type_name = "guest backed shaders",
         .backup_placement = &vmw_mob_placement,
         .create = vmw_gb_shader_create,
@@ -106,7 +108,9 @@ static const struct vmw_res_func vmw_gb_shader_func = {
 static const struct vmw_res_func vmw_dx_shader_func = {
         .res_type = vmw_res_shader,
         .needs_backup = true,
-        .may_evict = false,
+        .may_evict = true,
+        .prio = 3,
+        .dirty_prio = 3,
         .type_name = "dx shaders",
         .backup_placement = &vmw_mob_placement,
         .create = vmw_dx_shader_create,
@@ -423,7 +427,7 @@ static int vmw_dx_shader_create(struct vmw_resource *res)
 
         WARN_ON_ONCE(!shader->committed);
 
-        if (!list_empty(&res->mob_head)) {
+        if (vmw_resource_mob_attached(res)) {
                 mutex_lock(&dev_priv->binding_mutex);
                 ret = vmw_dx_shader_unscrub(res);
                 mutex_unlock(&dev_priv->binding_mutex);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
index 219471903bc1..c40d44f4d9af 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
@@ -112,6 +112,8 @@ static const struct vmw_res_func vmw_legacy_surface_func = {
         .res_type = vmw_res_surface,
         .needs_backup = false,
         .may_evict = true,
+        .prio = 1,
+        .dirty_prio = 1,
         .type_name = "legacy surfaces",
         .backup_placement = &vmw_srf_placement,
         .create = &vmw_legacy_srf_create,
@@ -124,6 +126,8 @@ static const struct vmw_res_func vmw_gb_surface_func = {
         .res_type = vmw_res_surface,
         .needs_backup = true,
         .may_evict = true,
+        .prio = 1,
+        .dirty_prio = 2,
         .type_name = "guest backed surfaces",
         .backup_placement = &vmw_mob_placement,
         .create = vmw_gb_surface_create,
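
For reference, the eviction-priority bookkeeping added to vmwgfx_drv.h above is
self-contained enough to model in user space. The sketch below is illustrative
only and not part of the patch; demo_bo, the helper names and main() are made
up to mirror vmw_bo_prio_adjust(), vmw_bo_prio_add() and vmw_bo_prio_del(), and
it assumes TTM_MAX_BO_PRIORITY is 4 (the minimum that the patch's
BUILD_BUG_ON() permits):

#include <assert.h>
#include <stdio.h>

/* Stand-in for the kernel-side constant (illustration only). */
#define TTM_MAX_BO_PRIORITY 4

struct demo_bo {
        unsigned int priority;                       /* mirrors the TTM bo priority set in the patch */
        unsigned int res_prios[TTM_MAX_BO_PRIORITY]; /* per-priority counts of attached resources */
};

/* Pick the highest priority that still has attached resources, else the default 3. */
static void demo_bo_prio_adjust(struct demo_bo *bo)
{
        int i = TTM_MAX_BO_PRIORITY;

        while (i--) {
                if (bo->res_prios[i]) {
                        bo->priority = i;
                        return;
                }
        }
        bo->priority = 3;
}

static void demo_bo_prio_add(struct demo_bo *bo, int prio)
{
        if (bo->res_prios[prio]++ == 0)
                demo_bo_prio_adjust(bo);
}

static void demo_bo_prio_del(struct demo_bo *bo, int prio)
{
        if (--bo->res_prios[prio] == 0)
                demo_bo_prio_adjust(bo);
}

int main(void)
{
        struct demo_bo bo = { .priority = 3 };

        demo_bo_prio_add(&bo, 1);  /* attach a surface-like resource (prio 1) */
        assert(bo.priority == 1);  /* bo is now a preferred eviction candidate */
        demo_bo_prio_add(&bo, 3);  /* attach a shader-like resource (prio 3) */
        assert(bo.priority == 3);  /* highest attached priority wins */
        demo_bo_prio_del(&bo, 3);  /* detach the shader-like resource */
        assert(bo.priority == 1);  /* falls back to the surface priority */
        printf("final priority: %u\n", bo.priority);
        return 0;
}

As in the patch, the priority is only recomputed when the first resource of a
given priority attaches or the last one detaches, so the buffer object keeps
the highest priority of its attached resources without rescanning them on
every attach or detach.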