[v2,29/29] drm/doc: gpusvm: Add GPU SVM documentation

Message ID 20241016032518.539495-30-matthew.brost@intel.com (mailing list archive)
State New, archived
Series Introduce GPU SVM and Xe SVM implementation

Commit Message

Matthew Brost Oct. 16, 2024, 3:25 a.m. UTC
Add documentation for agreed-upon GPU SVM design principles, current
status, and future plans.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 Documentation/gpu/rfc/gpusvm.rst | 129 +++++++++++++++++++++++++++++++++++++++
 Documentation/gpu/rfc/index.rst  |   4 ++
 2 files changed, 133 insertions(+)
 create mode 100644 Documentation/gpu/rfc/gpusvm.rst

Patch

diff --git a/Documentation/gpu/rfc/gpusvm.rst b/Documentation/gpu/rfc/gpusvm.rst
new file mode 100644
index 000000000000..2d3f79a6c30a
--- /dev/null
+++ b/Documentation/gpu/rfc/gpusvm.rst
@@ -0,0 +1,129 @@
+===============
+GPU SVM Section
+===============
+
+Agreed upon design principles
+=============================
+
+* migrate_to_ram path
+	* Rely on core MM concepts (migration ptes, page refs, and page locking)
+	  only
+	* No driver-specific locks other than locks for hardware interaction in
+	  this path
+	* Partial migration is supported
+	* Driver handles mixed migrations via retry loops rather than locking
+* Eviction
+	* Only looks at physical memory data structures and locks
+	* Does not look at mm/vma structs or rely on those being locked
+* GPU fault side
+	* mmap_read lock only taken around core MM functions which require it
+	* Big retry loop to handle all races with the mmu notifier under the
+	  GPU pagetable locks/mmu notifier range lock/whatever we end up
+	  calling those (see the sketch after this list)
+	* Races (especially against concurrent eviction/migrate_to_ram) should
+	  not be handled on the fault side by trying to hold locks
+* Physical memory to virtual backpointer
+	* Does not work; no pointers from physical memory to virtual should
+	  exist
+* GPU pagetable locking
+	* Notifier lock only protects the range tree, pages, pagetable entries,
+	  and mmu notifier seqno tracking; it is not a global lock to protect
+	  against races
+	* All races handled with big retry as mentioned above
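+
+The big retry loop above follows the standard mmu_interval_notifier
+pattern from core MM. A minimal sketch of the pattern is below; the
+structure, lock, and helper names are illustrative placeholders, not
+the actual GPU SVM API:
+
+.. code-block:: c
+
+	struct example_range {
+		struct mmu_interval_notifier notifier;
+		struct mutex notifier_lock;	/* GPU pagetable/notifier lock */
+		/* pages, GPU pagetable state, ... */
+	};
+
+	int example_gpu_fault(struct example_range *range)
+	{
+		unsigned long seq;
+		int err;
+
+	again:
+		/* Snapshot the notifier sequence number before faulting pages. */
+		seq = mmu_interval_read_begin(&range->notifier);
+
+		/* Fault in and collect pages, e.g. via hmm_range_fault(). */
+		err = example_get_pages(range);
+		if (err)
+			return err;
+
+		/* Program GPU pagetables under the notifier lock. */
+		mutex_lock(&range->notifier_lock);
+		if (mmu_interval_read_retry(&range->notifier, seq)) {
+			/* Raced with an invalidation; unwind and retry. */
+			mutex_unlock(&range->notifier_lock);
+			goto again;
+		}
+		err = example_bind_pages(range);
+		mutex_unlock(&range->notifier_lock);
+
+		return err;
+	}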
+
+Overview of current design
+==========================
+
+The current design is as simple as possible, to get a working baseline which
+can be built upon.
+
+.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
+   :doc: Overview
+
+.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
+   :doc: Locking
+
+.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
+   :doc: Migration
+
+.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
+   :doc: Partial Unmapping of Ranges
+
+.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
+   :doc: Examples
+
+Possible future design features
+===============================
+
+* Concurrent GPU faults
+	* CPU faults are concurrent, so it makes sense to have concurrent GPU faults
+	* Should be possible with fine-grained locking in the driver GPU
+	  fault handler
+	* No expected GPU SVM changes required
+* Ranges with mixed system and device pages
+	* Can be added fairly easily to drm_gpusvm_get_pages if required
+* Multi-GPU support
+	* Work in progress, with patches expected after GPU SVM initially
+	  lands
+	* Ideally can be done with little to no changes to GPU SVM
+* Drop ranges in favor of radix tree
+	* May be desirable for faster notifiers
+* Compound device pages
+	* Nvidia, AMD, and Intel have all agreed that expensive core MM
+	  functions in the migrate device layer are a performance bottleneck;
+	  having compound device pages should help increase performance by
+	  reducing the number of these expensive calls
+* Higher-order DMA mapping for migration
+	* 4k DMA mapping adversely affects migration performance on Intel
+	  hardware; higher-order (2M) DMA mapping should help, as sketched below
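+
+A rough illustration of the last point, assuming a 2M compound device
+page is available (placeholder code, not the GPU SVM API):
+
+.. code-block:: c
+
+	/* One 2M mapping replaces 512 individual 4k mappings. */
+	dma_addr_t example_map_huge(struct device *dev, struct page *page)
+	{
+		return dma_map_page(dev, page, 0, SZ_2M, DMA_BIDIRECTIONAL);
+	}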
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 476719771eef..396e535377fb 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -16,6 +16,10 @@  host such documentation:
 * Once the code has landed move all the documentation to the right places in
   the main core, helper or driver sections.
 
+.. toctree::
+
+    gpusvm.rst
+
 .. toctree::
 
     i915_gem_lmem.rst