From patchwork Thu Mar 20 17:29:59 2025
From: Matthew Auld
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, Thomas Hellström, Matthew Brost
Subject: [PATCH 2/7] drm/gpusvm: use more selective dma dir in get_pages()
Date: Thu, 20 Mar 2025 17:29:59 +0000
Message-ID: <20250320172956.168358-11-matthew.auld@intel.com>
In-Reply-To: <20250320172956.168358-9-matthew.auld@intel.com>
References: <20250320172956.168358-9-matthew.auld@intel.com>

If we are only reading the memory then, from the device's point of view,
the direction can be DMA_TO_DEVICE. This aligns with the xe-userptr code.
Using the most restrictive data direction that still covers the access is
normally a good idea.
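To make the pattern concrete, here is a minimal sketch (illustrative
only, not code from this patch; map_svm_page() is a made-up helper) of
picking the tightest direction at map time:

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    /* Pick the most restrictive DMA direction the access allows. */
    static dma_addr_t map_svm_page(struct device *dev, struct page *page,
                                   unsigned int order, bool read_only)
    {
            /*
             * If the device will only ever read this memory,
             * DMA_TO_DEVICE is sufficient; otherwise assume it may
             * also write back and map bidirectionally.
             */
            enum dma_data_direction dir = read_only ? DMA_TO_DEVICE :
                                                      DMA_BIDIRECTIONAL;

            return dma_map_page(dev, page, 0, PAGE_SIZE << order, dir);
    }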
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Matthew Brost
Reviewed-by: Thomas Hellström
Reviewed-by: Matthew Brost
---
 drivers/gpu/drm/drm_gpusvm.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 48993cef4a74..7f1cf5492bba 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1355,6 +1355,8 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
 	int err = 0;
 	struct dev_pagemap *pagemap;
 	struct drm_pagemap *dpagemap;
+	enum dma_data_direction dma_dir = ctx->read_only ? DMA_TO_DEVICE :
+							   DMA_BIDIRECTIONAL;
 
 retry:
 	hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
@@ -1459,7 +1461,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
 				dpagemap->ops->device_map(dpagemap,
 							  gpusvm->drm->dev,
 							  page, order,
-							  DMA_BIDIRECTIONAL);
+							  dma_dir);
 			if (dma_mapping_error(gpusvm->drm->dev,
 					      range->dma_addr[j].addr)) {
 				err = -EFAULT;
@@ -1478,7 +1480,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
 			addr = dma_map_page(gpusvm->drm->dev,
 					    page, 0,
 					    PAGE_SIZE << order,
-					    DMA_BIDIRECTIONAL);
+					    dma_dir);
 			if (dma_mapping_error(gpusvm->drm->dev, addr)) {
 				err = -EFAULT;
 				goto err_unmap;
@@ -1486,7 +1488,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
 
 			range->dma_addr[j] = drm_pagemap_device_addr_encode
 				(addr, DRM_INTERCONNECT_SYSTEM, order,
-				 DMA_BIDIRECTIONAL);
+				 dma_dir);
 		}
 		i += 1 << order;
 		num_dma_mapped = i;
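For context, a caller opts into the read-only behaviour through the
gpusvm context. The read_only field and the function name come straight
from the diff above; the snippet itself is a sketch, assuming the usual
const struct drm_gpusvm_ctx *ctx parameter rather than quoting real
driver code:

    /* Sketch: request device-read-only mappings for a range. */
    const struct drm_gpusvm_ctx ctx = {
            .read_only = true,      /* device never writes these pages */
    };

    err = drm_gpusvm_range_get_pages(gpusvm, range, &ctx);

With read_only set, every page in the range is mapped DMA_TO_DEVICE
instead of DMA_BIDIRECTIONAL, matching what the xe-userptr path already
does.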