Message ID: 20240105112218.351265-4-jacek.lawrynowicz@linux.intel.com (mailing list archive)
State: New, archived
Series: accel/ivpu fixes for 6.8
On 1/5/2024 4:22 AM, Jacek Lawrynowicz wrote:
> From: "Wachowski, Karol" <karol.wachowski@intel.com>
>
> It is common need to be able to  see IOVA/physical to VPU addresses

Errant double space between "to" and "see"

> mappings. Especially when debugging different kind of memory related
> issues. Lack of such logs forces user to modify and recompile KMD manually.
>
> This commit adds those logs under MMU debug mask which can be turned on
> dynamically with module param during KMD load.

As far as I understand, the preference is to not expose any kind of raw
addresses as it is seen as a security issue, and usually the addresses
don't have any real value to someone reading logs, etc. I believe I
picked this up from GregKH.

However, this commit text suggests there is value, and I see that one
needs to be root to enable this, which could probably be considered a
sufficient gate to avoid the data getting into the wrong hands.

Is it possible to provide more details as a justification for this?
Perhaps an example of a past issue where this data was necessary for
debug?

> Signed-off-by: Wachowski, Karol <karol.wachowski@intel.com>
> Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
> ---
>  drivers/accel/ivpu/ivpu_drv.h         | 1 +
>  drivers/accel/ivpu/ivpu_mmu_context.c | 9 +++++++++
>  2 files changed, 10 insertions(+)
>
> diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
> index ebc4b84f27b2..9b6e336626e3 100644
> --- a/drivers/accel/ivpu/ivpu_drv.h
> +++ b/drivers/accel/ivpu/ivpu_drv.h
> @@ -56,6 +56,7 @@
>  #define IVPU_DBG_JSM     BIT(10)
>  #define IVPU_DBG_KREF    BIT(11)
>  #define IVPU_DBG_RPM     BIT(12)
> +#define IVPU_DBG_MMU_MAP BIT(13)
>
>  #define ivpu_err(vdev, fmt, ...) \
>  	drm_err(&(vdev)->drm, "%s(): " fmt, __func__, ##__VA_ARGS__)
> diff --git a/drivers/accel/ivpu/ivpu_mmu_context.c b/drivers/accel/ivpu/ivpu_mmu_context.c
> index 12a8c09d4547..fe6161299236 100644
> --- a/drivers/accel/ivpu/ivpu_mmu_context.c
> +++ b/drivers/accel/ivpu/ivpu_mmu_context.c
> @@ -355,6 +355,9 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
>  		dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset;
>  		size_t size = sg_dma_len(sg) + sg->offset;
>
> +		ivpu_dbg(vdev, MMU_MAP, "Map ctx: %u dma_addr: 0x%llx vpu_addr: 0x%llx size: %lu\n",
> +			 ctx->id, dma_addr, vpu_addr, size);
> +
>  		ret = ivpu_mmu_context_map_pages(vdev, ctx, vpu_addr, dma_addr, size, prot);
>  		if (ret) {
>  			ivpu_err(vdev, "Failed to map context pages\n");
> @@ -366,6 +369,7 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
>
>  	/* Ensure page table modifications are flushed from wc buffers to memory */
>  	wmb();
> +

This looks like an unrelated whitespace change (although I see it pairs
with the whitespace change below).

>  	mutex_unlock(&ctx->lock);
>
>  	ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id);
> @@ -388,14 +392,19 @@ ivpu_mmu_context_unmap_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ct
>  	mutex_lock(&ctx->lock);
>
>  	for_each_sgtable_dma_sg(sgt, sg, i) {
> +		dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset;
>  		size_t size = sg_dma_len(sg) + sg->offset;
>
> +		ivpu_dbg(vdev, MMU_MAP, "Unmap ctx: %u dma_addr: 0x%llx vpu_addr: 0x%llx size: %lu\n",
> +			 ctx->id, dma_addr, vpu_addr, size);
> +
>  		ivpu_mmu_context_unmap_pages(ctx, vpu_addr, size);
>  		vpu_addr += size;
>  	}
>
>  	/* Ensure page table modifications are flushed from wc buffers to memory */
>  	wmb();
> +

This looks like an unrelated whitespace change.

>  	mutex_unlock(&ctx->lock);
>
>  	ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id);
On 05.01.2024 16:32, Jeffrey Hugo wrote:
> On 1/5/2024 4:22 AM, Jacek Lawrynowicz wrote:
>> From: "Wachowski, Karol" <karol.wachowski@intel.com>
>>
>> It is common need to be able to  see IOVA/physical to VPU addresses
>
> Errant double space between "to" and "see"

Sure

>> mappings. Especially when debugging different kind of memory related
>> issues. Lack of such logs forces user to modify and recompile KMD manually.
>>
>> This commit adds those logs under MMU debug mask which can be turned on
>> dynamically with module param during KMD load.
>
> As far as I understand, the preference is to not expose any kind of raw
> addresses as it is seen as a security issue, and usually the addresses
> don't have any real value to someone reading logs, etc. I believe I
> picked this up from GregKH.
>
> However, this commit text suggests there is value, and I see that one
> needs to be root to enable this, which could probably be considered a
> sufficient gate to avoid the data getting into the wrong hands.
>
> Is it possible to provide more details as a justification for this?
> Perhaps an example of a past issue where this data was necessary for
> debug?

There were multiple occasions where these messages were useful:
  - Debugging IOMMU issues on pre-production hardware
  - Enabling DevTLB cache on a functional simulator
  - Debugging performance issues with fragmented buffers

There is always some security tradeoff when enabling debug features but
in this case it seems to be worth it.
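For reference, BIT(13) works out to 0x2000. Assuming the debug mask is exposed as a module parameter at load time as the commit message describes (the module and parameter names below are assumptions; check `modinfo` on a real system), enabling just the new MMU map logging might look like:

```shell
# BIT(13) as a hex mask value
mask=$(printf '0x%x' $((1 << 13)))
echo "$mask"   # 0x2000

# On a real system (root required; module/parameter names are assumptions):
#   modprobe intel_vpu dbg_mask=$mask
```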
diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
index ebc4b84f27b2..9b6e336626e3 100644
--- a/drivers/accel/ivpu/ivpu_drv.h
+++ b/drivers/accel/ivpu/ivpu_drv.h
@@ -56,6 +56,7 @@
 #define IVPU_DBG_JSM     BIT(10)
 #define IVPU_DBG_KREF    BIT(11)
 #define IVPU_DBG_RPM     BIT(12)
+#define IVPU_DBG_MMU_MAP BIT(13)
 
 #define ivpu_err(vdev, fmt, ...) \
 	drm_err(&(vdev)->drm, "%s(): " fmt, __func__, ##__VA_ARGS__)
diff --git a/drivers/accel/ivpu/ivpu_mmu_context.c b/drivers/accel/ivpu/ivpu_mmu_context.c
index 12a8c09d4547..fe6161299236 100644
--- a/drivers/accel/ivpu/ivpu_mmu_context.c
+++ b/drivers/accel/ivpu/ivpu_mmu_context.c
@@ -355,6 +355,9 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
 		dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset;
 		size_t size = sg_dma_len(sg) + sg->offset;
 
+		ivpu_dbg(vdev, MMU_MAP, "Map ctx: %u dma_addr: 0x%llx vpu_addr: 0x%llx size: %lu\n",
+			 ctx->id, dma_addr, vpu_addr, size);
+
 		ret = ivpu_mmu_context_map_pages(vdev, ctx, vpu_addr, dma_addr, size, prot);
 		if (ret) {
 			ivpu_err(vdev, "Failed to map context pages\n");
@@ -366,6 +369,7 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
 
 	/* Ensure page table modifications are flushed from wc buffers to memory */
 	wmb();
+
 	mutex_unlock(&ctx->lock);
 
 	ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id);
@@ -388,14 +392,19 @@ ivpu_mmu_context_unmap_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ct
 	mutex_lock(&ctx->lock);
 
 	for_each_sgtable_dma_sg(sgt, sg, i) {
+		dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset;
 		size_t size = sg_dma_len(sg) + sg->offset;
 
+		ivpu_dbg(vdev, MMU_MAP, "Unmap ctx: %u dma_addr: 0x%llx vpu_addr: 0x%llx size: %lu\n",
+			 ctx->id, dma_addr, vpu_addr, size);
+
 		ivpu_mmu_context_unmap_pages(ctx, vpu_addr, size);
 		vpu_addr += size;
 	}
 
 	/* Ensure page table modifications are flushed from wc buffers to memory */
 	wmb();
+
 	mutex_unlock(&ctx->lock);
 
 	ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id);