From patchwork Fri Nov 27 12:05:57 2020
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 11935945
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 081/162] HAX drm/i915/lmem: support CPU relocations
Date: Fri, 27 Nov 2020 12:05:57 +0000
Message-Id: <20201127120718.454037-82-matthew.auld@intel.com>
In-Reply-To: <20201127120718.454037-1-matthew.auld@intel.com>
References: <20201127120718.454037-1-matthew.auld@intel.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Abdiel Janulgue, dri-devel@lists.freedesktop.org, Thomas Hellström, Rodrigo Vivi

** DO NOT MERGE. RELOCATION SUPPORT WILL BE DROPPED FROM DG1+ **

Add LMEM support for the CPU reloc path. When doing relocations we have
both a GPU and a CPU reloc path, as well as some debugging options to
force a particular path. The GPU reloc path is preferred when the object
is not currently idle; otherwise we use the CPU reloc path. Since we
can't kmap the object, and the mappable aperture might not be available,
add support for mapping it through LMEMBAR.
Signed-off-by: Matthew Auld
Signed-off-by: Thomas Hellström
Cc: Joonas Lahtinen
Cc: Abdiel Janulgue
Cc: Rodrigo Vivi
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 53 +++++++++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c   | 12 +++++
 drivers/gpu/drm/i915/gem/i915_gem_lmem.h   |  4 ++
 3 files changed, 65 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 91f0c3fd9a4b..e73a761a7d1f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -14,6 +14,7 @@
 #include "display/intel_frontbuffer.h"

 #include "gem/i915_gem_ioctls.h"
+#include "gem/i915_gem_lmem.h"
 #include "gt/intel_context.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_gt_buffer_pool.h"
@@ -278,6 +279,7 @@ struct i915_execbuffer {
 		bool has_llc : 1;
 		bool has_fence : 1;
 		bool needs_unfenced : 1;
+		bool is_lmem : 1;

 		struct i915_request *rq;
 		u32 *rq_cmd;
@@ -1049,6 +1051,7 @@ static void reloc_cache_init(struct reloc_cache *cache,
 	cache->has_fence = cache->gen < 4;
 	cache->needs_unfenced = INTEL_INFO(i915)->unfenced_needs_alignment;
 	cache->node.flags = 0;
+	cache->is_lmem = false;
 	reloc_cache_clear(cache);
 }

@@ -1128,10 +1131,14 @@ static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer
 	} else {
 		struct i915_ggtt *ggtt = cache_to_ggtt(cache);

-		intel_gt_flush_ggtt_writes(ggtt->vm.gt);
+		if (!cache->is_lmem)
+			intel_gt_flush_ggtt_writes(ggtt->vm.gt);
 		io_mapping_unmap_atomic((void __iomem *)vaddr);

-		if (drm_mm_node_allocated(&cache->node)) {
+		if (cache->is_lmem) {
+			i915_gem_object_unpin_pages((struct drm_i915_gem_object *)cache->node.mm);
+			cache->is_lmem = false;
+		} else if (drm_mm_node_allocated(&cache->node)) {
 			ggtt->vm.clear_range(&ggtt->vm,
 					     cache->node.start,
 					     cache->node.size);
@@ -1184,6 +1191,40 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
 	return vaddr;
 }

+static void *reloc_lmem(struct drm_i915_gem_object *obj,
+			struct reloc_cache *cache,
+			unsigned long page)
+{
+	void *vaddr;
+	int err;
+
+	GEM_BUG_ON(use_cpu_reloc(cache, obj));
+
+	if (cache->vaddr) {
+		io_mapping_unmap_atomic((void __force __iomem *) unmask_page(cache->vaddr));
+	} else {
+		err = i915_gem_object_pin_pages(obj);
+		if (err)
+			return ERR_PTR(err);
+
+		err = i915_gem_object_set_to_wc_domain(obj, true);
+		if (err) {
+			i915_gem_object_unpin_pages(obj);
+			return ERR_PTR(err);
+		}
+
+		cache->node.mm = (void *)obj;
+		cache->is_lmem = true;
+	}
+
+	vaddr = i915_gem_object_lmem_io_map_page_atomic(obj, page);
+
+	cache->vaddr = (unsigned long)vaddr;
+	cache->page = page;
+
+	return vaddr;
+}
+
 static void *reloc_iomap(struct drm_i915_gem_object *obj,
 			 struct i915_execbuffer *eb,
 			 unsigned long page)
@@ -1262,8 +1303,12 @@ static void *reloc_vaddr(struct drm_i915_gem_object *obj,
 		vaddr = unmask_page(cache->vaddr);
 	} else {
 		vaddr = NULL;
-		if ((cache->vaddr & KMAP) == 0)
-			vaddr = reloc_iomap(obj, eb, page);
+		if ((cache->vaddr & KMAP) == 0) {
+			if (i915_gem_object_is_lmem(obj))
+				vaddr = reloc_lmem(obj, cache, page);
+			else
+				vaddr = reloc_iomap(obj, eb, page);
+		}
 		if (!vaddr)
 			vaddr = reloc_kmap(obj, cache, page);
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index e953965f8263..f6c4d5998ff9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -17,6 +17,18 @@ const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = {
 	.release = i915_gem_object_release_memory_region,
 };

+void __iomem *
+i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj,
+					unsigned long n)
+{
+	resource_size_t offset;
+
+	offset = i915_gem_object_get_dma_address(obj, n);
+	offset -= obj->mm.region->region.start;
+
+	return io_mapping_map_atomic_wc(&obj->mm.region->iomap, offset);
+}
+
 bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj)
 {
 	return obj->ops == &i915_gem_lmem_obj_ops;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
index fc3f15580fe3..bf7e11fad17b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
@@ -14,6 +14,10 @@ struct intel_memory_region;

 extern const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops;

+void __iomem *
+i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj,
+					unsigned long n);
+
 bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj);

 struct drm_i915_gem_object *