Message ID | 20241021052236.1820329-5-vivek.kasireddy@intel.com (mailing list archive) |
---|---|
State | New, archived |
Series | drm/xe/sriov: Don't migrate dmabuf BO to System RAM while running in VM |
Hi Vivek,

kernel test robot noticed the following build errors:

[auto build test ERROR on drm-xe/drm-xe-next]
[also build test ERROR on drm/drm-next drm-exynos/exynos-drm-next drm-intel/for-linux-next drm-intel/for-linux-next-fixes drm-misc/drm-misc-next drm-tip/drm-tip linus/master v6.12-rc4 next-20241021]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:            https://github.com/intel-lab-lkp/linux/commits/Vivek-Kasireddy/PCI-P2PDMA-Don-t-enforce-ACS-check-for-functions-of-same-device/20241021-134804
base:           https://gitlab.freedesktop.org/drm/xe/kernel.git drm-xe-next
patch link:     https://lore.kernel.org/r/20241021052236.1820329-5-vivek.kasireddy%40intel.com
patch subject:  [PATCH v2 4/5] drm/xe/bo: Create new dma_addr array for dmabuf BOs associated with VFs
config:         i386-buildonly-randconfig-003-20241022 (https://download.01.org/0day-ci/archive/20241022/202410221702.FLgKnDgM-lkp@intel.com/config)
compiler:       gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241022/202410221702.FLgKnDgM-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202410221702.FLgKnDgM-lkp@intel.com/

All errors (new ones prefixed by >>):

   drivers/gpu/drm/xe/xe_bo.c: In function 'xe_bo_translate_iova_to_dpa':
>> drivers/gpu/drm/xe/xe_bo.c:591:29: error: invalid use of undefined type 'struct drm_pagemap_dma_addr'
     591 |                 bo->dma_addr[i] = drm_pagemap_dma_addr_encode(addr,
         |                             ^
>> drivers/gpu/drm/xe/xe_bo.c:591:35: error: implicit declaration of function 'drm_pagemap_dma_addr_encode' [-Werror=implicit-function-declaration]
     591 |                 bo->dma_addr[i] = drm_pagemap_dma_addr_encode(addr,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/gpu/drm/xe/xe_bo.c:592:49: error: 'DRM_INTERCONNECT_DRIVER' undeclared (first use in this function)
     592 |                                                 DRM_INTERCONNECT_DRIVER,
         |                                                 ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/xe/xe_bo.c:592:49: note: each undeclared identifier is reported only once for each function it appears in
   drivers/gpu/drm/xe/xe_bo.c:591:33: error: invalid use of undefined type 'struct drm_pagemap_dma_addr'
     591 |                 bo->dma_addr[i] = drm_pagemap_dma_addr_encode(addr,
         |                                 ^
   In file included from include/linux/percpu.h:5,
                    from arch/x86/include/asm/msr.h:15,
                    from arch/x86/include/asm/tsc.h:10,
                    from arch/x86/include/asm/timex.h:6,
                    from include/linux/timex.h:67,
                    from include/linux/time32.h:13,
                    from include/linux/time.h:60,
                    from include/linux/jiffies.h:10,
                    from include/linux/ktime.h:25,
                    from include/linux/timer.h:6,
                    from include/linux/workqueue.h:9,
                    from include/linux/mm_types.h:19,
                    from include/linux/mmzone.h:22,
                    from include/linux/gfp.h:7,
                    from include/linux/mm.h:7,
                    from include/linux/pagemap.h:8,
                    from include/drm/ttm/ttm_tt.h:30,
                    from drivers/gpu/drm/xe/xe_bo.h:9,
                    from drivers/gpu/drm/xe/xe_bo.c:6:
   drivers/gpu/drm/xe/xe_bo.c: In function 'xe_bo_sg_to_dma_addr_array':
>> drivers/gpu/drm/xe/xe_bo.c:626:55: error: invalid application of 'sizeof' to incomplete type 'struct drm_pagemap_dma_addr'
     626 |         bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
         |                                                       ^
   include/linux/alloc_tag.h:202:16: note: in definition of macro 'alloc_hooks_tag'
     202 |         typeof(_do_alloc) _res = _do_alloc;                     \
         |                ^~~~~~~~~
   include/linux/slab.h:925:49: note: in expansion of macro 'alloc_hooks'
     925 | #define kmalloc_array(...)                      alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
         |                                                 ^~~~~~~~~~~
   drivers/gpu/drm/xe/xe_bo.c:626:24: note: in expansion of macro 'kmalloc_array'
     626 |         bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
         |                        ^~~~~~~~~~~~~
>> drivers/gpu/drm/xe/xe_bo.c:626:55: error: invalid application of 'sizeof' to incomplete type 'struct drm_pagemap_dma_addr'
     626 |         bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
         |                                                       ^
   include/linux/alloc_tag.h:202:34: note: in definition of macro 'alloc_hooks_tag'
     202 |         typeof(_do_alloc) _res = _do_alloc;                     \
         |                                  ^~~~~~~~~
   include/linux/slab.h:925:49: note: in expansion of macro 'alloc_hooks'
     925 | #define kmalloc_array(...)                      alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
         |                                                 ^~~~~~~~~~~
   drivers/gpu/drm/xe/xe_bo.c:626:24: note: in expansion of macro 'kmalloc_array'
     626 |         bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
         |                        ^~~~~~~~~~~~~
   drivers/gpu/drm/xe/xe_bo.c:626:22: warning: assignment to 'struct drm_pagemap_dma_addr *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
     626 |         bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
         |                      ^
   cc1: some warnings being treated as errors


vim +591 drivers/gpu/drm/xe/xe_bo.c

   570	
   571	
   572	static void xe_bo_translate_iova_to_dpa(struct iommu_domain *domain,
   573						struct xe_bo *bo, struct sg_table *sg,
   574						resource_size_t io_start, int vfid)
   575	{
   576		struct xe_device *xe = xe_bo_device(bo);
   577		struct xe_gt *gt = xe_root_mmio_gt(xe);
   578		struct scatterlist *sgl;
   579		struct xe_bo *lmem_bo;
   580		phys_addr_t phys;
   581		dma_addr_t addr;
   582		u64 offset, i;
   583	
   584		lmem_bo = xe_gt_sriov_pf_config_get_lmem_obj(gt, ++vfid);
   585	
   586		for_each_sgtable_dma_sg(sg, sgl, i) {
   587			phys = iommu_iova_to_phys(domain, sg_dma_address(sgl));
   588			offset = phys - io_start;
   589			addr = xe_bo_addr(lmem_bo, offset, sg_dma_len(sgl));
   590	
 > 591			bo->dma_addr[i] = drm_pagemap_dma_addr_encode(addr,
 > 592						DRM_INTERCONNECT_DRIVER,
   593						get_order(sg_dma_len(sgl)),
   594						DMA_BIDIRECTIONAL);
   595		}
   596	}
   597	
   598	static int xe_bo_sg_to_dma_addr_array(struct sg_table *sg, struct xe_bo *bo)
   599	{
   600		struct xe_device *xe = xe_bo_device(bo);
   601		struct iommu_domain *domain;
   602		resource_size_t io_start;
   603		struct pci_dev *pdev;
   604		phys_addr_t phys;
   605		int vfid;
   606	
   607		if (!IS_SRIOV_PF(xe))
   608			return 0;
   609	
   610		domain = iommu_get_domain_for_dev(xe->drm.dev);
   611		if (!domain)
   612			return 0;
   613	
   614		phys = iommu_iova_to_phys(domain, sg_dma_address(sg->sgl));
   615		if (page_is_ram(PFN_DOWN(phys)))
   616			return 0;
   617	
   618		pdev = xe_find_vf_dev(xe, phys);
   619		if (!pdev)
   620			return 0;
   621	
   622		vfid = pci_iov_vf_id(pdev);
   623		if (vfid < 0)
   624			return 0;
   625	
 > 626		bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
   627					     GFP_KERNEL);
   628		if (!bo->dma_addr)
   629			return -ENOMEM;
   630	
   631		bo->is_devmem_external = true;
   632		io_start = pci_resource_start(pdev, LMEM_BAR);
   633		xe_bo_translate_iova_to_dpa(domain, bo, sg, io_start, vfid);
   634	
   635		return 0;
   636	}
   637	
Hi Vivek,

kernel test robot noticed the following build errors:

[auto build test ERROR on drm-xe/drm-xe-next]
[also build test ERROR on drm/drm-next drm-exynos/exynos-drm-next drm-intel/for-linux-next drm-intel/for-linux-next-fixes drm-misc/drm-misc-next drm-tip/drm-tip linus/master v6.12-rc4 next-20241021]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:            https://github.com/intel-lab-lkp/linux/commits/Vivek-Kasireddy/PCI-P2PDMA-Don-t-enforce-ACS-check-for-functions-of-same-device/20241021-134804
base:           https://gitlab.freedesktop.org/drm/xe/kernel.git drm-xe-next
patch link:     https://lore.kernel.org/r/20241021052236.1820329-5-vivek.kasireddy%40intel.com
patch subject:  [PATCH v2 4/5] drm/xe/bo: Create new dma_addr array for dmabuf BOs associated with VFs
config:         x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20241022/202410221832.R04DR21j-lkp@intel.com/config)
compiler:       clang version 18.1.8 (https://github.com/llvm/llvm-project 3b5b5c1ec4a3095ab096dd780e84d7ab81f3d7ff)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241022/202410221832.R04DR21j-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202410221832.R04DR21j-lkp@intel.com/

All errors (new ones prefixed by >>):

>> drivers/gpu/drm/xe/xe_bo.c:591:15: error: subscript of pointer to incomplete type 'struct drm_pagemap_dma_addr'
     591 |                 bo->dma_addr[i] = drm_pagemap_dma_addr_encode(addr,
         |                 ~~~~~~~~~~~~^
   drivers/gpu/drm/xe/xe_bo_types.h:78:9: note: forward declaration of 'struct drm_pagemap_dma_addr'
      78 |         struct drm_pagemap_dma_addr *dma_addr;
         |                ^
>> drivers/gpu/drm/xe/xe_bo.c:591:21: error: call to undeclared function 'drm_pagemap_dma_addr_encode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     591 |                 bo->dma_addr[i] = drm_pagemap_dma_addr_encode(addr,
         |                                   ^
>> drivers/gpu/drm/xe/xe_bo.c:592:7: error: use of undeclared identifier 'DRM_INTERCONNECT_DRIVER'
     592 |                                                 DRM_INTERCONNECT_DRIVER,
         |                                                 ^
>> drivers/gpu/drm/xe/xe_bo.c:626:48: error: invalid application of 'sizeof' to an incomplete type 'struct drm_pagemap_dma_addr'
     626 |         bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
         |                                                 ^~~~~~~~~~~~~~~
   include/linux/slab.h:925:63: note: expanded from macro 'kmalloc_array'
     925 | #define kmalloc_array(...)                      alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
         |                                                             ^~~~~~~~~~~
   include/linux/alloc_tag.h:210:31: note: expanded from macro 'alloc_hooks'
     210 |         alloc_hooks_tag(&_alloc_tag, _do_alloc);                        \
         |                                      ^~~~~~~~~
   include/linux/alloc_tag.h:202:9: note: expanded from macro 'alloc_hooks_tag'
     202 |         typeof(_do_alloc) _res = _do_alloc;                             \
         |                ^~~~~~~~~
   drivers/gpu/drm/xe/xe_bo_types.h:78:9: note: forward declaration of 'struct drm_pagemap_dma_addr'
      78 |         struct drm_pagemap_dma_addr *dma_addr;
         |                ^
>> drivers/gpu/drm/xe/xe_bo.c:626:48: error: invalid application of 'sizeof' to an incomplete type 'struct drm_pagemap_dma_addr'
     626 |         bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
         |                                                 ^~~~~~~~~~~~~~~
   include/linux/slab.h:925:63: note: expanded from macro 'kmalloc_array'
     925 | #define kmalloc_array(...)                      alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
         |                                                             ^~~~~~~~~~~
   include/linux/alloc_tag.h:210:31: note: expanded from macro 'alloc_hooks'
     210 |         alloc_hooks_tag(&_alloc_tag, _do_alloc);                        \
         |                                      ^~~~~~~~~
   include/linux/alloc_tag.h:202:27: note: expanded from macro 'alloc_hooks_tag'
     202 |         typeof(_do_alloc) _res = _do_alloc;                             \
         |                                  ^~~~~~~~~
   drivers/gpu/drm/xe/xe_bo_types.h:78:9: note: forward declaration of 'struct drm_pagemap_dma_addr'
      78 |         struct drm_pagemap_dma_addr *dma_addr;
         |                ^
   5 errors generated.

Kconfig warnings: (for reference only)
   WARNING: unmet direct dependencies detected for MODVERSIONS
   Depends on [n]: MODULES [=y] && !COMPILE_TEST [=y]
   Selected by [y]:
   - RANDSTRUCT_FULL [=y] && (CC_HAS_RANDSTRUCT [=y] || GCC_PLUGINS [=n]) && MODULES [=y]


vim +591 drivers/gpu/drm/xe/xe_bo.c

   570	
   571	
   572	static void xe_bo_translate_iova_to_dpa(struct iommu_domain *domain,
   573						struct xe_bo *bo, struct sg_table *sg,
   574						resource_size_t io_start, int vfid)
   575	{
   576		struct xe_device *xe = xe_bo_device(bo);
   577		struct xe_gt *gt = xe_root_mmio_gt(xe);
   578		struct scatterlist *sgl;
   579		struct xe_bo *lmem_bo;
   580		phys_addr_t phys;
   581		dma_addr_t addr;
   582		u64 offset, i;
   583	
   584		lmem_bo = xe_gt_sriov_pf_config_get_lmem_obj(gt, ++vfid);
   585	
   586		for_each_sgtable_dma_sg(sg, sgl, i) {
   587			phys = iommu_iova_to_phys(domain, sg_dma_address(sgl));
   588			offset = phys - io_start;
   589			addr = xe_bo_addr(lmem_bo, offset, sg_dma_len(sgl));
   590	
 > 591			bo->dma_addr[i] = drm_pagemap_dma_addr_encode(addr,
 > 592						DRM_INTERCONNECT_DRIVER,
   593						get_order(sg_dma_len(sgl)),
   594						DMA_BIDIRECTIONAL);
   595		}
   596	}
   597	
   598	static int xe_bo_sg_to_dma_addr_array(struct sg_table *sg, struct xe_bo *bo)
   599	{
   600		struct xe_device *xe = xe_bo_device(bo);
   601		struct iommu_domain *domain;
   602		resource_size_t io_start;
   603		struct pci_dev *pdev;
   604		phys_addr_t phys;
   605		int vfid;
   606	
   607		if (!IS_SRIOV_PF(xe))
   608			return 0;
   609	
   610		domain = iommu_get_domain_for_dev(xe->drm.dev);
   611		if (!domain)
   612			return 0;
   613	
   614		phys = iommu_iova_to_phys(domain, sg_dma_address(sg->sgl));
   615		if (page_is_ram(PFN_DOWN(phys)))
   616			return 0;
   617	
   618		pdev = xe_find_vf_dev(xe, phys);
   619		if (!pdev)
   620			return 0;
   621	
   622		vfid = pci_iov_vf_id(pdev);
   623		if (vfid < 0)
   624			return 0;
   625	
 > 626		bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
   627					     GFP_KERNEL);
   628		if (!bo->dma_addr)
   629			return -ENOMEM;
   630	
   631		bo->is_devmem_external = true;
   632		io_start = pci_resource_start(pdev, LMEM_BAR);
   633		xe_bo_translate_iova_to_dpa(domain, bo, sg, io_start, vfid);
   634	
   635		return 0;
   636	}
   637	
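Both reports point at the same root cause: xe_bo.c only ever sees struct drm_pagemap_dma_addr through the pointer member added in xe_bo_types.h (line 78), so the type stays incomplete, and neither drm_pagemap_dma_addr_encode() nor DRM_INTERCONNECT_DRIVER is declared anywhere in the trees the robot built. Those identifiers appear to come from separate drm_pagemap work that is not part of the tested base branches. The sketch below is purely an illustration of the kind of definition that would have to become visible to xe_bo.c (via whatever header that dependency provides) before this file can compile; the field layout, the enum values, and the header-level placement are assumptions inferred from the error messages, not the actual API.

/*
 * Illustrative assumption only -- not the real drm_pagemap API.
 * Shows the minimal shape that the diagnostics above imply xe_bo.c expects.
 */
#include <linux/types.h>
#include <linux/dma-direction.h>

enum drm_interconnect_protocol {
	DRM_INTERCONNECT_SYSTEM,	/* address usable by the system/CPU */
	DRM_INTERCONNECT_DRIVER,	/* address only meaningful to the driver (e.g. a DPA) */
};

struct drm_pagemap_dma_addr {
	dma_addr_t addr;			/* encoded address (a DPA in this patch) */
	enum drm_interconnect_protocol proto;	/* which interconnect the address belongs to */
	unsigned int order;			/* segment size expressed as a page order */
	enum dma_data_direction dir;		/* DMA direction of the mapping */
};

static inline struct drm_pagemap_dma_addr
drm_pagemap_dma_addr_encode(dma_addr_t addr,
			    enum drm_interconnect_protocol proto,
			    unsigned int order,
			    enum dma_data_direction dir)
{
	/* Pack the address plus its metadata into one per-segment entry. */
	return (struct drm_pagemap_dma_addr) {
		.addr = addr,
		.proto = proto,
		.order = order,
		.dir = dir,
	};
}

With such a definition in scope, the forward declaration in xe_bo_types.h would be backed by a complete type, and the kmalloc_array() sizing and the per-segment encode calls flagged above would resolve.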
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 5b232f2951b1..81a2f8c8031a 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -6,6 +6,7 @@
 #include "xe_bo.h"
 
 #include <linux/dma-buf.h>
+#include <linux/iommu.h>
 
 #include <drm/drm_drv.h>
 #include <drm/drm_gem_ttm_helper.h>
@@ -15,16 +16,19 @@
 #include <drm/ttm/ttm_tt.h>
 #include <uapi/drm/xe_drm.h>
 
+#include "regs/xe_bars.h"
 #include "xe_device.h"
 #include "xe_dma_buf.h"
 #include "xe_drm_client.h"
 #include "xe_ggtt.h"
 #include "xe_gt.h"
+#include "xe_gt_sriov_pf_config.h"
 #include "xe_map.h"
 #include "xe_migrate.h"
 #include "xe_pm.h"
 #include "xe_preempt_fence.h"
 #include "xe_res_cursor.h"
+#include "xe_sriov_pf_helpers.h"
 #include "xe_trace_bo.h"
 #include "xe_ttm_stolen_mgr.h"
 #include "xe_vm.h"
@@ -543,6 +547,94 @@ static int xe_bo_trigger_rebind(struct xe_device *xe, struct xe_bo *bo,
 	return ret;
 }
 
+static struct pci_dev *xe_find_vf_dev(struct xe_device *xe,
+				      phys_addr_t phys)
+{
+	struct pci_dev *pdev, *pf_pdev = to_pci_dev(xe->drm.dev);
+	resource_size_t io_start, io_size;
+
+	list_for_each_entry(pdev, &pf_pdev->bus->devices, bus_list) {
+		if (pdev->is_physfn)
+			continue;
+
+		io_start = pci_resource_start(pdev, LMEM_BAR);
+		io_size = pci_resource_len(pdev, LMEM_BAR);
+
+		if (phys >= io_start &&
+		    phys < (io_start + io_size - PAGE_SIZE))
+			return pdev;
+	}
+
+	return NULL;
+}
+
+
+static void xe_bo_translate_iova_to_dpa(struct iommu_domain *domain,
+					struct xe_bo *bo, struct sg_table *sg,
+					resource_size_t io_start, int vfid)
+{
+	struct xe_device *xe = xe_bo_device(bo);
+	struct xe_gt *gt = xe_root_mmio_gt(xe);
+	struct scatterlist *sgl;
+	struct xe_bo *lmem_bo;
+	phys_addr_t phys;
+	dma_addr_t addr;
+	u64 offset, i;
+
+	lmem_bo = xe_gt_sriov_pf_config_get_lmem_obj(gt, ++vfid);
+
+	for_each_sgtable_dma_sg(sg, sgl, i) {
+		phys = iommu_iova_to_phys(domain, sg_dma_address(sgl));
+		offset = phys - io_start;
+		addr = xe_bo_addr(lmem_bo, offset, sg_dma_len(sgl));
+
+		bo->dma_addr[i] = drm_pagemap_dma_addr_encode(addr,
+					DRM_INTERCONNECT_DRIVER,
+					get_order(sg_dma_len(sgl)),
+					DMA_BIDIRECTIONAL);
+	}
+}
+
+static int xe_bo_sg_to_dma_addr_array(struct sg_table *sg, struct xe_bo *bo)
+{
+	struct xe_device *xe = xe_bo_device(bo);
+	struct iommu_domain *domain;
+	resource_size_t io_start;
+	struct pci_dev *pdev;
+	phys_addr_t phys;
+	int vfid;
+
+	if (!IS_SRIOV_PF(xe))
+		return 0;
+
+	domain = iommu_get_domain_for_dev(xe->drm.dev);
+	if (!domain)
+		return 0;
+
+	phys = iommu_iova_to_phys(domain, sg_dma_address(sg->sgl));
+	if (page_is_ram(PFN_DOWN(phys)))
+		return 0;
+
+	pdev = xe_find_vf_dev(xe, phys);
+	if (!pdev)
+		return 0;
+
+	vfid = pci_iov_vf_id(pdev);
+	if (vfid < 0)
+		return 0;
+
+	bo->dma_addr = kmalloc_array(sg->nents, sizeof(*bo->dma_addr),
+				     GFP_KERNEL);
+	if (!bo->dma_addr)
+		return -ENOMEM;
+
+	bo->is_devmem_external = true;
+	io_start = pci_resource_start(pdev, LMEM_BAR);
+	xe_bo_translate_iova_to_dpa(domain, bo, sg, io_start, vfid);
+
+	return 0;
+}
+
 /*
  * The dma-buf map_attachment() / unmap_attachment() is hooked up here.
  * Note that unmapping the attachment is deferred to the next
@@ -560,12 +652,15 @@ static int xe_bo_move_dmabuf(struct ttm_buffer_object *ttm_bo,
 					       ttm);
 	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
 	struct sg_table *sg;
+	int ret;
 
 	xe_assert(xe, attach);
 	xe_assert(xe, ttm_bo->ttm);
 
-	if (new_res->mem_type == XE_PL_SYSTEM)
-		goto out;
+	if (new_res->mem_type == XE_PL_SYSTEM) {
+		ttm_bo_move_null(ttm_bo, new_res);
+		return 0;
+	}
 
 	if (ttm_bo->sg) {
 		dma_buf_unmap_attachment(attach, ttm_bo->sg, DMA_BIDIRECTIONAL);
@@ -576,13 +671,16 @@ static int xe_bo_move_dmabuf(struct ttm_buffer_object *ttm_bo,
 	if (IS_ERR(sg))
 		return PTR_ERR(sg);
 
+	ret = xe_bo_sg_to_dma_addr_array(sg, ttm_to_xe_bo(ttm_bo));
+	if (ret < 0) {
+		dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL);
+		return ret;
+	}
+
 	ttm_bo->sg = sg;
 	xe_tt->sg = sg;
 
-out:
-	ttm_bo_move_null(ttm_bo, new_res);
-
-	return 0;
+	return ret;
 }
 
 /**
@@ -1066,6 +1164,8 @@ static void xe_ttm_bo_release_notify(struct ttm_buffer_object *ttm_bo)
 
 static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
 {
+	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
+
 	if (!xe_bo_is_xe_bo(ttm_bo))
 		return;
 
@@ -1079,6 +1179,10 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
 		dma_buf_unmap_attachment(ttm_bo->base.import_attach,
 					 ttm_bo->sg,
 					 DMA_BIDIRECTIONAL);
+
+		if (bo->is_devmem_external) {
+			kfree(bo->dma_addr);
+		}
 		ttm_bo->sg = NULL;
 		xe_tt->sg = NULL;
 	}
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index 13c6d8a69e91..f74876be3f8d 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -66,7 +66,16 @@ struct xe_bo {
 
 	/** @ccs_cleared */
 	bool ccs_cleared;
-
+	/**
+	 * @is_devmem_external: Whether this BO is an imported dma-buf that
+	 * is LMEM based.
+	 */
+	bool is_devmem_external;
+	/**
+	 * @dma_addr: An array to store DMA addresses (DPAs) for imported
+	 * dmabuf BOs that are LMEM based.
+	 */
+	struct drm_pagemap_dma_addr *dma_addr;
 	/**
 	 * @cpu_caching: CPU caching mode. Currently only used for userspace
 	 * objects. Exceptions are system memory on DGFX, which is always
For BOs of type ttm_bo_type_sg that are backed by PCI BAR addresses
associated with a VF, we need to adjust and translate these addresses into
LMEM addresses to make the BOs usable by the PF. Otherwise, the BOs (i.e.,
PCI BAR addresses) are only accessible by the CPU and not by the GPU.

In order to do the above, we first need to identify whether the DMA
addresses associated with an imported BO (type ttm_bo_type_sg) belong to
System RAM, a VF, or another PCI device. Once we confirm that they belong
to a VF, we convert the DMA addresses (IOVAs in this case) to DPAs and
create a new dma_addr array (of type drm_pagemap_dma_addr), populating it
with the new addresses along with the segment sizes.

v2:
- Use dma_addr array instead of sg table to store translated addresses
  (Matt)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/xe/xe_bo.c       | 116 +++++++++++++++++++++++++++++--
 drivers/gpu/drm/xe/xe_bo_types.h |  11 ++-
 2 files changed, 120 insertions(+), 7 deletions(-)
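To make the IOVA-to-DPA step described above concrete, here is a condensed sketch of the per-segment translation the patch performs. It is an illustration only, not the patch itself: the names vf_bar_start and vf_lmem_base_dpa are hypothetical placeholders for what the real code derives from pci_resource_start(pdev, LMEM_BAR) and from the VF's provisioned LMEM object (xe_gt_sriov_pf_config_get_lmem_obj() plus xe_bo_addr()), and it assumes the drm_pagemap_dma_addr definitions that the build reports above show are not yet available.

/*
 * Condensed, illustrative sketch of the translation done per dma-buf segment.
 * vf_bar_start / vf_lmem_base_dpa are hypothetical stand-ins (see note above).
 */
static void sketch_translate_iova_to_dpa(struct iommu_domain *domain,
					 struct sg_table *sg,
					 struct drm_pagemap_dma_addr *out,
					 resource_size_t vf_bar_start,
					 dma_addr_t vf_lmem_base_dpa)
{
	struct scatterlist *sgl;
	u64 i;

	for_each_sgtable_dma_sg(sg, sgl, i) {
		/* IOVA from the dma-buf sg table -> CPU physical address inside the VF's LMEM BAR */
		phys_addr_t phys = iommu_iova_to_phys(domain, sg_dma_address(sgl));
		/* Offset of this segment within the VF's BAR... */
		u64 offset = phys - vf_bar_start;
		/* ...rebased onto the VF's LMEM allocation to get a device physical address (DPA) */
		dma_addr_t dpa = vf_lmem_base_dpa + offset;

		/* Store the DPA together with the segment size (as a page order) and DMA direction */
		out[i] = drm_pagemap_dma_addr_encode(dpa, DRM_INTERCONNECT_DRIVER,
						     get_order(sg_dma_len(sgl)),
						     DMA_BIDIRECTIONAL);
	}
}

Keeping a drm_pagemap_dma_addr per segment means the segment size and DMA direction travel with the DPA, which is why v2 moved from reusing the sg table to a dedicated dma_addr array.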