From patchwork Fri Mar 7 22:41:20 2025
X-Patchwork-Submitter: Jonathan Cavitt <jonathan.cavitt@intel.com>
X-Patchwork-Id: 14007264
From: Jonathan Cavitt <jonathan.cavitt@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: saurabhg.gupta@intel.com, alex.zuo@intel.com, jonathan.cavitt@intel.com,
	joonas.lahtinen@linux.intel.com, matthew.brost@intel.com,
	jianxun.zhang@intel.com, shuicheng.lin@intel.com,
	dri-devel@lists.freedesktop.org
Subject: [PATCH v6 2/6] drm/xe/xe_gt_pagefault: Migrate pagefault struct to header
Date: Fri, 7 Mar 2025 22:41:20 +0000
Message-ID: <20250307224125.111430-3-jonathan.cavitt@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250307224125.111430-1-jonathan.cavitt@intel.com>
References: <20250307224125.111430-1-jonathan.cavitt@intel.com>

Migrate the pagefault struct from xe_gt_pagefault.c to the
xe_gt_pagefault.h header file, along with the associated enum values.
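
With struct xe_pagefault and its access-type values exported through
xe_gt_pagefault.h, other compilation units can reference them without
duplicating the definitions. As a minimal illustration (not part of this
patch; the helper below is hypothetical), a consumer could classify a
fault like so:

	#include "xe_gt_pagefault.h"

	/* Hypothetical consumer: report whether a fault needs write access
	 * to the backing VMA, using the enum values moved into the header.
	 */
	static bool fault_needs_write(const struct xe_pagefault *pf)
	{
		return pf->access_type == XE_PAGEFAULT_ACCESS_TYPE_WRITE ||
		       pf->access_type == XE_PAGEFAULT_ACCESS_TYPE_ATOMIC;
	}
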
v2: Normalize names for common header (Matt Brost)

Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_pagefault.c | 41 +++++-----------------------
 drivers/gpu/drm/xe/xe_gt_pagefault.h | 28 +++++++++++++++++++
 2 files changed, 35 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index a4e688e72efd..c8a9058aa09f 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -23,33 +23,6 @@
 #include "xe_trace_bo.h"
 #include "xe_vm.h"
 
-struct pagefault {
-	u64 page_addr;
-	u32 asid;
-	u16 pdata;
-	u8 vfid;
-	u8 access_type;
-	u8 fault_type;
-	u8 fault_level;
-	u8 engine_class;
-	u8 engine_instance;
-	u8 fault_unsuccessful;
-	bool trva_fault;
-};
-
-enum access_type {
-	ACCESS_TYPE_READ = 0,
-	ACCESS_TYPE_WRITE = 1,
-	ACCESS_TYPE_ATOMIC = 2,
-	ACCESS_TYPE_RESERVED = 3,
-};
-
-enum fault_type {
-	NOT_PRESENT = 0,
-	WRITE_ACCESS_VIOLATION = 1,
-	ATOMIC_ACCESS_VIOLATION = 2,
-};
-
 struct acc {
 	u64 va_range_base;
 	u32 asid;
@@ -61,9 +34,9 @@ struct acc {
 	u8 engine_instance;
 };
 
-static bool access_is_atomic(enum access_type access_type)
+static bool access_is_atomic(enum xe_pagefault_access_type access_type)
 {
-	return access_type == ACCESS_TYPE_ATOMIC;
+	return access_type == XE_PAGEFAULT_ACCESS_TYPE_ATOMIC;
 }
 
 static bool vma_is_valid(struct xe_tile *tile, struct xe_vma *vma)
@@ -205,7 +178,7 @@ static struct xe_vm *asid_to_vm(struct xe_device *xe, u32 asid)
 	return vm;
 }
 
-static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
+static int handle_pagefault(struct xe_gt *gt, struct xe_pagefault *pf)
 {
 	struct xe_device *xe = gt_to_xe(gt);
 	struct xe_vm *vm;
@@ -237,7 +210,7 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
 		goto unlock_vm;
 	}
 
-	if (xe_vma_read_only(vma) && pf->access_type != ACCESS_TYPE_READ) {
+	if (xe_vma_read_only(vma) && pf->access_type != XE_PAGEFAULT_ACCESS_TYPE_READ) {
 		err = -EPERM;
 		goto unlock_vm;
 	}
@@ -271,7 +244,7 @@ static int send_pagefault_reply(struct xe_guc *guc,
 	return xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action), 0, 0);
 }
 
-static void print_pagefault(struct xe_device *xe, struct pagefault *pf)
+static void print_pagefault(struct xe_device *xe, struct xe_pagefault *pf)
 {
 	drm_dbg(&xe->drm, "\n\tASID: %d\n"
 		"\tVFID: %d\n"
@@ -291,7 +264,7 @@ static void print_pagefault(struct xe_device *xe, struct pagefault *pf)
 
 #define PF_MSG_LEN_DW	4
 
-static bool get_pagefault(struct pf_queue *pf_queue, struct pagefault *pf)
+static bool get_pagefault(struct pf_queue *pf_queue, struct xe_pagefault *pf)
 {
 	const struct xe_guc_pagefault_desc *desc;
 	bool ret = false;
@@ -378,7 +351,7 @@ static void pf_queue_work_func(struct work_struct *w)
 	struct xe_gt *gt = pf_queue->gt;
 	struct xe_device *xe = gt_to_xe(gt);
 	struct xe_guc_pagefault_reply reply = {};
-	struct pagefault pf = {};
+	struct xe_pagefault pf = {};
 	unsigned long threshold;
 	int ret;
 
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.h b/drivers/gpu/drm/xe/xe_gt_pagefault.h
index 839c065a5e4c..33616043d17a 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.h
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.h
@@ -11,6 +11,34 @@
 struct xe_gt;
 struct xe_guc;
 
+struct xe_pagefault {
+	u64 page_addr;
+	u32 asid;
+	u16 pdata;
+	u8 vfid;
+	u8 access_type;
+	u8 fault_type;
+	u8 fault_level;
+	u8 engine_class;
+	u8 engine_instance;
+	u8 fault_unsuccessful;
+	bool prefetch;
+	bool trva_fault;
+};
+
+enum xe_pagefault_access_type {
+	XE_PAGEFAULT_ACCESS_TYPE_READ = 0,
+	XE_PAGEFAULT_ACCESS_TYPE_WRITE = 1,
+	XE_PAGEFAULT_ACCESS_TYPE_ATOMIC = 2,
+	XE_PAGEFAULT_ACCESS_TYPE_RESERVED = 3,
+};
+
+enum xe_pagefault_type {
+	XE_PAGEFAULT_TYPE_NOT_PRESENT = 0,
+	XE_PAGEFAULT_TYPE_WRITE_ACCESS_VIOLATION = 1,
+	XE_PAGEFAULT_TYPE_ATOMIC_ACCESS_VIOLATION = 2,
+};
+
 int xe_gt_pagefault_init(struct xe_gt *gt);
 void xe_gt_pagefault_reset(struct xe_gt *gt);
 int xe_guc_pagefault_handler(struct xe_guc *guc, u32 *msg, u32 len);
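
The new xe_pagefault_type values carry over the removed fault_type enum
one-to-one (NOT_PRESENT = 0, WRITE_ACCESS_VIOLATION = 1,
ATOMIC_ACCESS_VIOLATION = 2). As a minimal sketch of decoding them (the
helper below is hypothetical and not part of this patch):

	#include "xe_gt_pagefault.h"

	/* Hypothetical helper: map the fault_type field of struct
	 * xe_pagefault onto a printable string using the enum values
	 * added to the header by this patch.
	 */
	static const char *fault_type_str(u8 fault_type)
	{
		switch (fault_type) {
		case XE_PAGEFAULT_TYPE_NOT_PRESENT:
			return "not present";
		case XE_PAGEFAULT_TYPE_WRITE_ACCESS_VIOLATION:
			return "write access violation";
		case XE_PAGEFAULT_TYPE_ATOMIC_ACCESS_VIOLATION:
			return "atomic access violation";
		default:
			return "unknown";
		}
	}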