From patchwork Wed Mar 26 16:57:49 2025
X-Patchwork-Submitter: "Cavitt, Jonathan"
X-Patchwork-Id: 14030361
From: Jonathan Cavitt
To: intel-xe@lists.freedesktop.org
Cc: saurabhg.gupta@intel.com, alex.zuo@intel.com, jonathan.cavitt@intel.com,
 joonas.lahtinen@linux.intel.com, matthew.brost@intel.com,
 jianxun.zhang@intel.com, shuicheng.lin@intel.com,
 dri-devel@lists.freedesktop.org, Michal.Wajdeczko@intel.com,
 michal.mrozek@intel.com, raag.jadav@intel.com, john.c.harrison@intel.com
Subject: [PATCH v14 5/6] drm/xe/xe_vm: Add per VM fault info
Date: Wed, 26 Mar 2025 16:57:49 +0000
Message-ID: <20250326165751.72881-6-jonathan.cavitt@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250326165751.72881-1-jonathan.cavitt@intel.com>
References: <20250326165751.72881-1-jonathan.cavitt@intel.com>

Add additional information to each VM so that it can report up to the
first 50 faults it has seen.
Currently, only pagefaults are saved this way, though in the future all
faults should be tracked by the VM for later reporting.  Additionally, of
the pagefaults reported, only failed pagefaults are saved this way, as
successful pagefaults recover silently and do not need to be reported to
userspace.

v2:
- Free vm after use (Shuicheng)
- Compress pf copy logic (Shuicheng)
- Update fault_unsuccessful before storing (Shuicheng)
- Fix old struct name in comments (Shuicheng)
- Keep first 50 pagefaults instead of last 50 (Jianxun)

v3:
- Avoid unnecessary execution by checking MAX_PFS earlier (jcavitt)
- Fix double-locking error (jcavitt)
- Assert kmemdup is successful (Shuicheng)

v4:
- Rename xe_vm.pfs to xe_vm.faults (jcavitt)
- Store fault data and not pagefault in xe_vm faults list (jcavitt)
- Store address, address type, and address precision per fault (jcavitt)
- Store engine class and instance data per fault (Jianxun)
- Add and fix kernel docs (Michal W)
- Properly handle kzalloc error (Michal W)
- s/MAX_PFS/MAX_FAULTS_SAVED_PER_VM (Michal W)
- Store fault level per fault (Michal M)

v5:
- Store fault and access type instead of address type (Jianxun)

v6:
- Store pagefaults in non-fault-mode VMs as well (Jianxun)

v7:
- Fix kernel docs and comments (Michal W)

v8:
- Fix double-locking issue (Jianxun)

Signed-off-by: Jonathan Cavitt
Suggested-by: Matthew Brost
Cc: Shuicheng Lin
Cc: Jianxun Zhang
Cc: Michal Wajdeczko
Cc: Michal Mrozek
---
 drivers/gpu/drm/xe/xe_gt_pagefault.c | 26 +++++++++++
 drivers/gpu/drm/xe/xe_vm.c           | 64 ++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h           |  9 ++++
 drivers/gpu/drm/xe/xe_vm_types.h     | 32 ++++++++++++++
 4 files changed, 131 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index 0cedf089a3f2..b7ad55682de3 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -345,6 +345,31 @@ int xe_guc_pagefault_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	return full ? -ENOSPC : 0;
 }
 
+static void save_pagefault_to_vm(struct xe_device *xe, struct xe_gt_pagefault *pf)
+{
+	struct xe_vm *vm;
+
+	/*
+	 * The pagefault may be associated with a VM that is not in fault
+	 * mode.  Do the same lookup as asid_to_vm(), except the VM is
+	 * returned even if it is not in fault mode.
+	 */
+	down_read(&xe->usm.lock);
+	vm = xa_load(&xe->usm.asid_to_vm, pf->asid);
+	if (vm)
+		xe_vm_get(vm);
+	else
+		vm = ERR_PTR(-EINVAL);
+	up_read(&xe->usm.lock);
+
+	if (IS_ERR(vm))
+		return;
+
+	xe_vm_add_fault_entry_pf(vm, pf);
+
+	xe_vm_put(vm);
+}
+
 #define USM_QUEUE_MAX_RUNTIME_MS 20
 
 static void pf_queue_work_func(struct work_struct *w)
@@ -364,6 +389,7 @@ static void pf_queue_work_func(struct work_struct *w)
 		if (unlikely(ret)) {
 			print_pagefault(xe, &pf);
 			pf.fault_unsuccessful = 1;
+			save_pagefault_to_vm(xe, &pf);
 			drm_dbg(&xe->drm, "Fault response: Unsuccessful %d\n", ret);
 		}
 
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 793f5bc393c2..d1ddf1f03e3f 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -28,6 +28,7 @@
 #include "xe_drm_client.h"
 #include "xe_exec_queue.h"
 #include "xe_gt_pagefault.h"
+#include "xe_gt_pagefault_types.h"
 #include "xe_gt_tlb_invalidation.h"
 #include "xe_migrate.h"
 #include "xe_pat.h"
@@ -778,6 +779,64 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
 		list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
 }
 
+/**
+ * xe_vm_add_fault_entry_pf() - Add pagefault to vm fault list
+ * @vm: The VM.
+ * @pf: The pagefault.
+ *
+ * This function takes the data from the pagefault @pf and saves it to
+ * @vm->faults.list.
+ *
+ * The function exits silently if the list is already full, and emits a
+ * warning if memory for the new entry could not be allocated.
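+ *
+ * Minimal usage sketch (illustrative only, not part of this patch; it assumes
+ * @vm was already looked up from the fault's ASID, as save_pagefault_to_vm()
+ * does, and that the fault has been marked unsuccessful):
+ *
+ *	pf->fault_unsuccessful = 1;
+ *	xe_vm_add_fault_entry_pf(vm, pf);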
+ */
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_gt_pagefault *pf)
+{
+	struct xe_vm_fault_entry *e = NULL;
+
+	spin_lock(&vm->faults.lock);
+
+	/*
+	 * Limit the number of faults in the fault list to prevent
+	 * memory overuse.
+	 */
+	if (vm->faults.len >= MAX_FAULTS_SAVED_PER_VM)
+		goto out;
+
+	/* Atomic allocation: the faults.lock spinlock is held here. */
+	e = kzalloc(sizeof(*e), GFP_ATOMIC);
+	if (!e) {
+		drm_warn(&vm->xe->drm,
+			 "Could not allocate memory for fault %i!\n",
+			 vm->faults.len);
+		goto out;
+	}
+
+	e->address = pf->page_addr;
+	e->address_precision = 1;
+	e->access_type = pf->access_type;
+	e->fault_type = pf->fault_type;
+	e->fault_level = pf->fault_level;
+	e->engine_class = pf->engine_class;
+	e->engine_instance = pf->engine_instance;
+
+	list_add_tail(&e->list, &vm->faults.list);
+	vm->faults.len++;
+out:
+	spin_unlock(&vm->faults.lock);
+}
+
+static void xe_vm_clear_fault_entries(struct xe_vm *vm)
+{
+	struct xe_vm_fault_entry *e, *tmp;
+
+	spin_lock(&vm->faults.lock);
+	list_for_each_entry_safe(e, tmp, &vm->faults.list, list) {
+		list_del(&e->list);
+		kfree(e);
+	}
+	vm->faults.len = 0;
+	spin_unlock(&vm->faults.lock);
+}
+
 static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
 {
 	int i;
@@ -1660,6 +1719,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 	init_rwsem(&vm->userptr.notifier_lock);
 	spin_lock_init(&vm->userptr.invalidated_lock);
 
+	INIT_LIST_HEAD(&vm->faults.list);
+	spin_lock_init(&vm->faults.lock);
+
 	ttm_lru_bulk_move_init(&vm->lru_bulk_move);
 
 	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
@@ -1930,6 +1992,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	}
 	up_write(&xe->usm.lock);
 
+	xe_vm_clear_fault_entries(vm);
+
 	for_each_tile(tile, xe, id)
 		xe_range_fence_tree_fini(&vm->rftree[id]);
 
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 0ef811fc2bde..9bd7e93824da 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -12,6 +12,12 @@
 #include "xe_map.h"
 #include "xe_vm_types.h"
 
+/**
+ * MAX_FAULTS_SAVED_PER_VM - Maximum number of faults each VM can store before
+ * future faults are discarded to prevent memory overuse
+ */
+#define MAX_FAULTS_SAVED_PER_VM	50
+
 struct drm_device;
 struct drm_printer;
 struct drm_file;
@@ -22,6 +28,7 @@ struct dma_fence;
 
 struct xe_exec_queue;
 struct xe_file;
+struct xe_gt_pagefault;
 struct xe_sync_entry;
 struct xe_svm_range;
 struct drm_exec;
@@ -257,6 +264,8 @@ int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma);
 
 int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma);
 
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_gt_pagefault *pf);
+
 bool xe_vm_validate_should_retry(struct drm_exec *exec, int err, ktime_t *end);
 
 int xe_vm_lock_vma(struct drm_exec *exec, struct xe_vma *vma);
 
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 84fa41b9fa20..5e9bb0052f69 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -19,6 +19,7 @@
 #include "xe_range_fence.h"
 
 struct xe_bo;
+struct xe_pagefault;
 struct xe_svm_range;
 struct xe_sync_entry;
 struct xe_user_fence;
@@ -142,6 +143,27 @@ struct xe_userptr_vma {
 
 struct xe_device;
 
+/**
+ * struct xe_vm_fault_entry - Elements of vm->faults.list
+ * @list: link into @xe_vm.faults.list
+ * @address: address of the fault
+ * @address_precision: precision of the faulted address
+ * @access_type: type of memory access that resulted in the fault
+ * @fault_type: type of the fault
+ * @fault_level: fault level of the fault
+ * @engine_class: class of the engine the fault was reported on
+ * @engine_instance: instance of the engine the fault was reported on
+ */
+struct xe_vm_fault_entry {
+	struct list_head list;
+	u64 address;
+	u32 address_precision;
+	u8 access_type;
+	u8 fault_type;
+	u8 fault_level;
+	u8 engine_class;
+	u8 engine_instance;
+};
+
 struct xe_vm {
 	/** @gpuvm: base GPUVM used to track VMAs */
 	struct drm_gpuvm gpuvm;
@@ -305,6 +327,16 @@ struct xe_vm {
 		bool capture_once;
 	} error_capture;
 
+	/** @faults: List of all faults associated with this VM */
+	struct {
+		/** @faults.lock: lock protecting @faults.list */
+		spinlock_t lock;
+		/** @faults.list: list of xe_vm_fault_entry entries */
+		struct list_head list;
+		/** @faults.len: length of @faults.list */
+		unsigned int len;
+	} faults;
+
 	/**
 	 * @tlb_flush_seqno: Required TLB flush seqno for the next exec.
 	 * protected by the vm resv.