@@ -35,6 +35,17 @@
/*
* Memory regions for data placement.
+ *
+ * Buffers placed in TTM_PL_SYSTEM are considered under TTM's control and can
+ * be swapped out whenever TTM thinks it is a good idea.
+ * In cases where drivers would like to use TTM_PL_SYSTEM as a valid
+ * placement they need to be able to handle the issues that arise from
+ * the above themselves.
+ *
+ * For BOs which reside in system memory but to which the accelerator
+ * requires direct access (i.e. their usage needs to be synchronized
+ * between the CPU and accelerator via fences), a new, driver-private
+ * placement that can handle such scenarios is a good idea.
*/
#define TTM_PL_SYSTEM 0
TTM was designed to allow TTM_PL_SYSTEM buffers to be fenced, but over
the years the code that was meant to handle it broke, and new changes
can not deal with buffers which have been placed in TTM_PL_SYSTEM but
do not remain idle.

CPU buffers which need to be fenced and shared with accelerators should
be placed in driver-specific placements that can explicitly handle
CPU/accelerator buffer fencing. Currently, apart from things silently
failing, nothing enforces that requirement, which means that it's easy
for drivers and new developers to get this wrong. To avoid the
confusion we can document this requirement and clarify the solution.

This came up during a discussion on dri-devel:
https://lore.kernel.org/dri-devel/232f45e9-8748-1243-09bf-56763e6668b3@amd.com

Signed-off-by: Zack Rusin <zackr@vmware.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 include/drm/ttm/ttm_placement.h | 11 +++++++++++
 1 file changed, 11 insertions(+)
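
As an aside (not part of this patch), a minimal sketch of what such a
driver-private placement for fenced, accelerator-accessible system
memory could look like is below. The MYDRV_PL_FENCED_SYSTEM name is
made up for illustration, and the driver would additionally have to set
up a resource manager for that mem_type (omitted here):

/* Illustrative only; the driver-specific names are hypothetical. */
#include <drm/ttm/ttm_placement.h>

/* Driver-private placements start at TTM_PL_PRIV. */
#define MYDRV_PL_FENCED_SYSTEM (TTM_PL_PRIV + 0)

/*
 * Single placement describing system pages which the accelerator
 * accesses directly, so the driver (not TTM core) is responsible for
 * synchronizing CPU/accelerator usage via fences.
 */
static const struct ttm_place mydrv_fenced_sys_place = {
	.fpfn = 0,
	.lpfn = 0,
	.mem_type = MYDRV_PL_FENCED_SYSTEM,
	.flags = 0,
};

static const struct ttm_placement mydrv_fenced_sys_placement = {
	.num_placement = 1,
	.placement = &mydrv_fenced_sys_place,
	.num_busy_placement = 1,
	.busy_placement = &mydrv_fenced_sys_place,
};

Buffers validated into such a placement stay out of TTM_PL_SYSTEM, so
the core swapout path never touches them while they are fenced.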