
[v7,32/32] drm/doc: gpusvm: Add GPU SVM documentation

Message ID 20250306012657.3505757-33-matthew.brost@intel.com (mailing list archive)
State New
Series Introduce GPU SVM and Xe SVM implementation

Commit Message

Matthew Brost March 6, 2025, 1:26 a.m. UTC
Add documentation for agreed-upon GPU SVM design principles, current
status, and future plans.

v4:
 - Address Thomas's feedback
v5:
 - s/Current/Baseline (Thomas)
v7:
 - Add license (CI)
 - Add examples for design guideline reasoning (Alistair)
 - Add snippet about possible livelock with concurrent GPU and CPU
   access (Alistair)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 Documentation/gpu/rfc/gpusvm.rst | 106 +++++++++++++++++++++++++++++++
 Documentation/gpu/rfc/index.rst  |   4 ++
 2 files changed, 110 insertions(+)
 create mode 100644 Documentation/gpu/rfc/gpusvm.rst

Comments

Alistair Popple March 6, 2025, 5:45 a.m. UTC | #1
On Wed, Mar 05, 2025 at 05:26:57PM -0800, Matthew Brost wrote:
> Add documentation for agreed-upon GPU SVM design principles, current
> status, and future plans.

One minor nit and a comment below, but feel free to add:

Acked-by: Alistair Popple <apopple@nvidia.com>

> v4:
>  - Address Thomas's feedback
> v5:
>  - s/Current/Baseline (Thomas)
> v7:
>  - Add license (CI)
>  - Add examples for design guideline reasoning (Alistair)
>  - Add snippet about possible livelock with concurrent GPU and CPU
>    access (Alistair)
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>  Documentation/gpu/rfc/gpusvm.rst | 106 +++++++++++++++++++++++++++++++
>  Documentation/gpu/rfc/index.rst  |   4 ++
>  2 files changed, 110 insertions(+)
>  create mode 100644 Documentation/gpu/rfc/gpusvm.rst
> 
> diff --git a/Documentation/gpu/rfc/gpusvm.rst b/Documentation/gpu/rfc/gpusvm.rst
> new file mode 100644
> index 000000000000..87d9f9506155
> --- /dev/null
> +++ b/Documentation/gpu/rfc/gpusvm.rst
> @@ -0,0 +1,106 @@
> +.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
> +
> +===============
> +GPU SVM Section
> +===============
> +
> +Agreed upon design principles
> +=============================
> +
> +* migrate_to_ram path
> +	* Rely only on core MM concepts (migration PTEs, page references, and
> +	  page locking).
> +	* No driver-specific locks other than locks for hardware interaction in
> +	  this path. Driver-defined locks to seal core MM races are not
> +	  required and are generally a bad idea.
> +	* An example of a driver-specific lock causing issues occurred before
> +	  fixing do_swap_page to lock the faulting page. A driver-exclusive lock
> +	  in migrate_to_ram produced a stable livelock if enough threads read
> +	  the faulting page.
> +	* Partial migration is supported (i.e., a subset of pages attempting to
> +	  migrate can actually migrate, with only the faulting page guaranteed
> +	  to migrate).
> +	* Driver handles mixed migrations via retry loops rather than locking.
> +* Eviction
> +	* Eviction is defined as migrating data from the GPU back to the
> +	  CPU without a virtual address, in order to free up GPU memory.
> +	* Only looking at physical memory data structures and locks as opposed to
> +	  looking at virtual memory data structures and locks.
> +	* No looking at mm/vma structs or relying on those being locked.
> +	* The rationale for the above two points is that CPU virtual addresses
> +	  can change at any moment, while the physical pages remain stable.
> +	* GPU page table invalidation, which requires a GPU virtual address, is
> +	  handled via the notifier that has access to the GPU virtual address.
> +* GPU fault side
> +	* The mmap_read lock is only used around core MM functions which require
> +	  it; drivers should strive to take it only within the GPU SVM layer.
> +	* Big retry loop to handle all races with the mmu notifier under the GPU
> +	  pagetable locks/mmu notifier range lock/whatever we end up calling
> +	  those.
> +	* Races (especially against concurrent eviction or migrate_to_ram)
> +	  should not be handled on the fault side by trying to hold locks;
> +	  rather, they should be handled using retry loops. One possible
> +	  exception is holding a BO's dma-resv lock during the initial migration
> +	  to VRAM, as this is a well-defined lock that can be taken underneath
> +	  the mmap_read lock.
> +	* One possible issue with the above approach is if a driver has a strict
> +	  migration policy requiring GPU access to occur in GPU memory.
> +	  Concurrent CPU access could cause a livelock due to endless retries.
> +	  While no current user (Xe) of GPU SVM has such a policy, it is likely
> +	  to be added in the future. Ideally, this should be resolved on the
> +	  core-MM side rather than through a driver-side lock.
> +* Physical memory to virtual backpointer
> +	* This does not work, as no pointers from physical memory to virtual
> +	  memory should exist. mremap() is an example of the core MM updating
> +	  the virtual address without notifying the driver.

Pretty minor nit, but this could be read as core MM won't send a mmu notifier
when calling mremap(). That's not the case, as it will get a notifier to
invalidate the address but won't explicitly get notified of the new address.

Not worth sending an update just for that though.

> +	* The physical memory backpointer (page->zone_device_data) should remain
> +	  stable from allocation to page free. Safely updating this against a
> +	  concurrent user would be very difficult unless the page is free.
> +* GPU pagetable locking
> +	* The notifier lock only protects the range tree, the pages-valid state
> +	  for a range (rather than a seqno, due to wider notifiers), pagetable
> +	  entries, and mmu notifier seqno tracking; it is not a global lock to
> +	  protect against races.
> +	* All races handled with big retry as mentioned above.
> +
> +Overview of baseline design
> +===========================
> +
> +The baseline design is as simple as possible, to get a working baseline
> +that can be built upon.
> +
> +.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
> +   :doc: Overview
> +   :doc: Locking
> +   :doc: Migration
> +   :doc: Partial Unmapping of Ranges
> +   :doc: Examples
> +
> +Possible future design features
> +===============================
> +
> +* Concurrent GPU faults
> +	* CPU faults are concurrent, so it makes sense to have concurrent GPU
> +	  faults.
> +	* Should be possible with fine-grained locking in the driver GPU
> +	  fault handler.
> +	* No expected GPU SVM changes required.
> +* Ranges with mixed system and device pages
> +	* Can be added to drm_gpusvm_get_pages fairly easily if required.
> +* Multi-GPU support
> +	* Work in progress, with patches expected after GPU SVM initially
> +	  lands.
> +	* Ideally can be done with little to no changes to GPU SVM.
> +* Drop ranges in favor of radix tree
> +	* May be desirable for faster notifiers.
> +* Compound device pages
> +	* Nvidia, AMD, and Intel have all agreed that expensive core MM
> +	  functions in the migrate device layer are a performance bottleneck;
> +	  compound device pages should help increase performance by reducing
> +	  the number of these expensive calls.

Balbir has also just posted an initial RFC implementation of this here:

https://lore.kernel.org/linux-mm/20250306044239.3874247-1-balbirs@nvidia.com/

> +* Higher order dma mapping for migration
> +	* 4k dma mapping adversely affects migration performance on Intel
> +	  hardware; higher-order (2M) dma mapping should help here.
> +* Build common userptr implementation on top of GPU SVM
> +* Driver side madvise implementation and migration policies
> +* Pull in pending dma-mapping API changes from Leon / Nvidia when these land
> diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
> index 476719771eef..396e535377fb 100644
> --- a/Documentation/gpu/rfc/index.rst
> +++ b/Documentation/gpu/rfc/index.rst
> @@ -16,6 +16,10 @@ host such documentation:
>  * Once the code has landed move all the documentation to the right places in
>    the main core, helper or driver sections.
>  
> +.. toctree::
> +
> +    gpusvm.rst
> +
>  .. toctree::
>  
>      i915_gem_lmem.rst
> -- 
> 2.34.1
>
Matthew Brost March 6, 2025, 6:08 a.m. UTC | #2
On Thu, Mar 06, 2025 at 04:45:31PM +1100, Alistair Popple wrote:
> On Wed, Mar 05, 2025 at 05:26:57PM -0800, Matthew Brost wrote:
> > Add documentation for agreed-upon GPU SVM design principles, current
> > status, and future plans.
> 
> One minor nit and a comment below, but feel free to add:
> 
> Acked-by: Alistair Popple <apopple@nvidia.com>
> 

Thanks!

> > v4:
> >  - Address Thomas's feedback
> > v5:
> >  - s/Current/Baseline (Thomas)
> > v7:
> >  - Add license (CI)
> >  - Add examples for design guideline reasoning (Alistair)
> >  - Add snippet about possible livelock with concurrent GPU and CPU
> >    access (Alistair)
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > ---
> >  Documentation/gpu/rfc/gpusvm.rst | 106 +++++++++++++++++++++++++++++++
> >  Documentation/gpu/rfc/index.rst  |   4 ++
> >  2 files changed, 110 insertions(+)
> >  create mode 100644 Documentation/gpu/rfc/gpusvm.rst
> > 
> > diff --git a/Documentation/gpu/rfc/gpusvm.rst b/Documentation/gpu/rfc/gpusvm.rst
> > new file mode 100644
> > index 000000000000..87d9f9506155
> > --- /dev/null
> > +++ b/Documentation/gpu/rfc/gpusvm.rst
> > @@ -0,0 +1,106 @@
> > +.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
> > +
> > +===============
> > +GPU SVM Section
> > +===============
> > +
> > +Agreed upon design principles
> > +=============================
> > +
> > +* migrate_to_ram path
> > +	* Rely only on core MM concepts (migration PTEs, page references, and
> > +	  page locking).
> > +	* No driver-specific locks other than locks for hardware interaction in
> > +	  this path. Driver-defined locks to seal core MM races are not
> > +	  required and are generally a bad idea.
> > +	* An example of a driver-specific lock causing issues occurred before
> > +	  fixing do_swap_page to lock the faulting page. A driver-exclusive lock
> > +	  in migrate_to_ram produced a stable livelock if enough threads read
> > +	  the faulting page.
> > +	* Partial migration is supported (i.e., a subset of pages attempting to
> > +	  migrate can actually migrate, with only the faulting page guaranteed
> > +	  to migrate).
> > +	* Driver handles mixed migrations via retry loops rather than locking.
> > +* Eviction
> > +	* Eviction is defined as migrating data from the GPU back to the
> > +	  CPU without a virtual address, in order to free up GPU memory.
> > +	* Only looking at physical memory data structures and locks as opposed to
> > +	  looking at virtual memory data structures and locks.
> > +	* No looking at mm/vma structs or relying on those being locked.
> > +	* The rationale for the above two points is that CPU virtual addresses
> > +	  can change at any moment, while the physical pages remain stable.
> > +	* GPU page table invalidation, which requires a GPU virtual address, is
> > +	  handled via the notifier that has access to the GPU virtual address.
> > +* GPU fault side
> > +	* The mmap_read lock is only used around core MM functions which require
> > +	  it; drivers should strive to take it only within the GPU SVM layer.
> > +	* Big retry loop to handle all races with the mmu notifier under the GPU
> > +	  pagetable locks/mmu notifier range lock/whatever we end up calling
> > +	  those.
> > +	* Races (especially against concurrent eviction or migrate_to_ram)
> > +	  should not be handled on the fault side by trying to hold locks;
> > +	  rather, they should be handled using retry loops. One possible
> > +	  exception is holding a BO's dma-resv lock during the initial migration
> > +	  to VRAM, as this is a well-defined lock that can be taken underneath
> > +	  the mmap_read lock.
> > +	* One possible issue with the above approach is if a driver has a strict
> > +	  migration policy requiring GPU access to occur in GPU memory.
> > +	  Concurrent CPU access could cause a livelock due to endless retries.
> > +	  While no current user (Xe) of GPU SVM has such a policy, it is likely
> > +	  to be added in the future. Ideally, this should be resolved on the
> > +	  core-MM side rather than through a driver-side lock.
> > +* Physical memory to virtual backpointer
> > +	* This does not work, as no pointers from physical memory to virtual
> > +	  memory should exist. mremap() is an example of the core MM updating
> > +	  the virtual address without notifying the driver.
> 
> Pretty minor nit, but this could be read as core MM won't send a mmu notifier
> when calling mremap(). That's not the case, as it will get a notifier to
> invalidate the address but won't explicitly get notified of the new address.
> 
> Not worth sending an update just for that though.
> 

Yep, will fix up when merging.

> > +	* The physical memory backpointer (page->zone_device_data) should remain
> > +	  stable from allocation to page free. Safely updating this against a
> > +	  concurrent user would be very difficult unless the page is free.
> > +* GPU pagetable locking
> > +	* The notifier lock only protects the range tree, the pages-valid state
> > +	  for a range (rather than a seqno, due to wider notifiers), pagetable
> > +	  entries, and mmu notifier seqno tracking; it is not a global lock to
> > +	  protect against races.
> > +	* All races handled with big retry as mentioned above.
> > +
> > +Overview of baseline design
> > +===========================
> > +
> > +The baseline design is as simple as possible, to get a working baseline
> > +that can be built upon.
> > +
> > +.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
> > +   :doc: Overview
> > +   :doc: Locking
> > +   :doc: Migration
> > +   :doc: Partial Unmapping of Ranges
> > +   :doc: Examples
> > +
> > +Possible future design features
> > +===============================
> > +
> > +* Concurrent GPU faults
> > +	* CPU faults are concurrent, so it makes sense to have concurrent GPU
> > +	  faults.
> > +	* Should be possible with fine-grained locking in the driver GPU
> > +	  fault handler.
> > +	* No expected GPU SVM changes required.
> > +* Ranges with mixed system and device pages
> > +	* Can be added to drm_gpusvm_get_pages fairly easily if required.
> > +* Multi-GPU support
> > +	* Work in progress, with patches expected after GPU SVM initially
> > +	  lands.
> > +	* Ideally can be done with little to no changes to GPU SVM.
> > +* Drop ranges in favor of radix tree
> > +	* May be desirable for faster notifiers.
> > +* Compound device pages
> > +	* Nvidia, AMD, and Intel have all agreed that expensive core MM
> > +	  functions in the migrate device layer are a performance bottleneck;
> > +	  compound device pages should help increase performance by reducing
> > +	  the number of these expensive calls.
> 
> Balbir has also just posted an initial RFC implementation of this here:
> 
> https://lore.kernel.org/linux-mm/20250306044239.3874247-1-balbirs@nvidia.com/
> 

Nice. Will take a look as we have time. I also want to pull in Leon's
series [1] too.

Matt

[1] https://lore.kernel.org/all/cover.1738765879.git.leonro@nvidia.com/

> > +* Higher order dma mapping for migration
> > +	* 4k dma mapping adversely affects migration performance on Intel
> > +	  hardware; higher-order (2M) dma mapping should help here.
> > +* Build common userptr implementation on top of GPU SVM
> > +* Driver side madvise implementation and migration policies
> > +* Pull in pending dma-mapping API changes from Leon / Nvidia when these land
> > diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
> > index 476719771eef..396e535377fb 100644
> > --- a/Documentation/gpu/rfc/index.rst
> > +++ b/Documentation/gpu/rfc/index.rst
> > @@ -16,6 +16,10 @@ host such documentation:
> >  * Once the code has landed move all the documentation to the right places in
> >    the main core, helper or driver sections.
> >  
> > +.. toctree::
> > +
> > +    gpusvm.rst
> > +
> >  .. toctree::
> >  
> >      i915_gem_lmem.rst
> > -- 
> > 2.34.1
> >

Patch

diff --git a/Documentation/gpu/rfc/gpusvm.rst b/Documentation/gpu/rfc/gpusvm.rst
new file mode 100644
index 000000000000..87d9f9506155
--- /dev/null
+++ b/Documentation/gpu/rfc/gpusvm.rst
@@ -0,0 +1,106 @@ 
+.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+
+===============
+GPU SVM Section
+===============
+
+Agreed upon design principles
+=============================
+
+* migrate_to_ram path
+	* Rely only on core MM concepts (migration PTEs, page references, and
+	  page locking).
+	* No driver-specific locks other than locks for hardware interaction in
+	  this path. Driver-defined locks to seal core MM races are not
+	  required and are generally a bad idea.
+	* An example of a driver-specific lock causing issues occurred before
+	  fixing do_swap_page to lock the faulting page. A driver-exclusive lock
+	  in migrate_to_ram produced a stable livelock if enough threads read
+	  the faulting page.
+	* Partial migration is supported (i.e., a subset of pages attempting to
+	  migrate can actually migrate, with only the faulting page guaranteed
+	  to migrate).
+	* Driver handles mixed migrations via retry loops rather than locking.
+* Eviction
+	* Eviction is defined as migrating data from the GPU back to the
+	  CPU without a virtual address, in order to free up GPU memory.
+	* Only looking at physical memory data structures and locks as opposed to
+	  looking at virtual memory data structures and locks.
+	* No looking at mm/vma structs or relying on those being locked.
+	* The rationale for the above two points is that CPU virtual addresses
+	  can change at any moment, while the physical pages remain stable.
+	* GPU page table invalidation, which requires a GPU virtual address, is
+	  handled via the notifier that has access to the GPU virtual address.
+* GPU fault side
+	* The mmap_read lock is only used around core MM functions which require
+	  it; drivers should strive to take it only within the GPU SVM layer.
+	* Big retry loop to handle all races with the mmu notifier under the GPU
+	  pagetable locks/mmu notifier range lock/whatever we end up calling
+	  those.
+	* Races (especially against concurrent eviction or migrate_to_ram)
+	  should not be handled on the fault side by trying to hold locks;
+	  rather, they should be handled using retry loops. One possible
+	  exception is holding a BO's dma-resv lock during the initial migration
+	  to VRAM, as this is a well-defined lock that can be taken underneath
+	  the mmap_read lock.
+	* One possible issue with the above approach is if a driver has a strict
+	  migration policy requiring GPU access to occur in GPU memory.
+	  Concurrent CPU access could cause a livelock due to endless retries.
+	  While no current user (Xe) of GPU SVM has such a policy, it is likely
+	  to be added in the future. Ideally, this should be resolved on the
+	  core-MM side rather than through a driver-side lock.
+* Physical memory to virtual backpointer
+	* This does not work, as no pointers from physical memory to virtual
+	  memory should exist. mremap() is an example of the core MM moving a
+	  mapping without explicitly notifying the driver of the new address.
+	* The physical memory backpointer (page->zone_device_data) should remain
+	  stable from allocation to page free. Safely updating this against a
+	  concurrent user would be very difficult unless the page is free.
+* GPU pagetable locking
+	* The notifier lock only protects the range tree, the pages-valid state
+	  for a range (rather than a seqno, due to wider notifiers), pagetable
+	  entries, and mmu notifier seqno tracking; it is not a global lock to
+	  protect against races.
+	* All races handled with big retry as mentioned above.
+
+Overview of baseline design
+===========================
+
+The baseline design is as simple as possible, to get a working baseline
+that can be built upon.
+
+.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
+   :doc: Overview
+   :doc: Locking
+   :doc: Migration
+   :doc: Partial Unmapping of Ranges
+   :doc: Examples
+
+Possible future design features
+===============================
+
+* Concurrent GPU faults
+	* CPU faults are concurrent, so it makes sense to have concurrent GPU
+	  faults.
+	* Should be possible with fine-grained locking in the driver GPU
+	  fault handler.
+	* No expected GPU SVM changes required.
+* Ranges with mixed system and device pages
+	* Can be added to drm_gpusvm_get_pages fairly easily if required.
+* Multi-GPU support
+	* Work in progress, with patches expected after GPU SVM initially
+	  lands.
+	* Ideally can be done with little to no changes to GPU SVM.
+* Drop ranges in favor of radix tree
+	* May be desirable for faster notifiers.
+* Compound device pages
+	* Nvidia, AMD, and Intel have all agreed that expensive core MM
+	  functions in the migrate device layer are a performance bottleneck;
+	  compound device pages should help increase performance by reducing
+	  the number of these expensive calls.
+* Higher order dma mapping for migration
+	* 4k dma mapping adversely affects migration performance on Intel
+	  hardware; higher-order (2M) dma mapping should help here.
+* Build common userptr implementation on top of GPU SVM
+* Driver side madvise implementation and migration policies
+* Pull in pending dma-mapping API changes from Leon / Nvidia when these land
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 476719771eef..396e535377fb 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -16,6 +16,10 @@  host such documentation:
 * Once the code has landed move all the documentation to the right places in
   the main core, helper or driver sections.
 
+.. toctree::
+
+    gpusvm.rst
+
 .. toctree::
 
     i915_gem_lmem.rst
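
For readers less familiar with the core MM machinery the design principles above
lean on, two rough sketches follow. The first illustrates the migrate_to_ram
principles (core MM concepts only, no driver-defined locks, partial migration
tolerated) using the migrate_vma API. It is not the GPU SVM or Xe
implementation: the function name, the pgmap owner, and the single-page
migration window are illustrative assumptions, and the device-to-system copy is
left as a comment.

#include <linux/memremap.h>
#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static void *my_pgmap_owner;	/* hypothetical: whatever owner the driver registered */

static vm_fault_t my_devmem_migrate_to_ram(struct vm_fault *vmf)
{
	unsigned long src = 0, dst = 0;
	struct migrate_vma migrate = {
		.vma		= vmf->vma,
		.start		= vmf->address & PAGE_MASK,
		.end		= (vmf->address & PAGE_MASK) + PAGE_SIZE,
		.src		= &src,
		.dst		= &dst,
		.pgmap_owner	= my_pgmap_owner,
		.flags		= MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
		.fault_page	= vmf->page,
	};
	struct page *dpage;

	/*
	 * Core MM installs migration PTEs and locks the device pages; no
	 * driver-defined lock is taken to seal races in this path.
	 */
	if (migrate_vma_setup(&migrate))
		return VM_FAULT_SIGBUS;

	/* Partial migration: only pages flagged MIGRATE_PFN_MIGRATE move. */
	if (src & MIGRATE_PFN_MIGRATE) {
		dpage = alloc_page(GFP_HIGHUSER_MOVABLE);
		if (dpage) {
			lock_page(dpage);
			/* Copy data from the device page to dpage here. */
			dst = migrate_pfn(page_to_pfn(dpage));
		}
	}

	/*
	 * If the faulting page did not migrate (e.g. allocation failure),
	 * finalize restores the original state and returning 0 lets the CPU
	 * refault and retry.
	 */
	migrate_vma_pages(&migrate);
	migrate_vma_finalize(&migrate);
	return 0;
}

Note that nothing here takes a driver-side lock; as the principles above
explain, a driver-exclusive lock in this path has previously produced a stable
livelock once enough CPU threads kept reading the faulting page.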
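
The second sketch is the "big retry loop" on the GPU fault side: the mmap_read
lock is held only around the core MM call, pages are validated and GPU
pagetable entries published under the driver's notifier lock, and any race with
an mmu notifier invalidation is handled by retrying from the top. The my_vm
structure, the notifier_lock name, and my_bind_pages_to_gpu() are hypothetical
stand-ins for driver state, not GPU SVM API.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct my_vm {				/* hypothetical driver state */
	struct mutex notifier_lock;	/* the GPU pagetable/notifier lock */
	void *pgmap_owner;
};

/* Hypothetical: writes the GPU PTEs for [start, start + npages * PAGE_SIZE). */
static int my_bind_pages_to_gpu(struct my_vm *vm, unsigned long start,
				unsigned long npages, const unsigned long *pfns)
{
	return 0;
}

/* Caller is assumed to hold a reference on @mm and size @pfns to @npages. */
static int my_gpu_fault(struct my_vm *vm, struct mm_struct *mm,
			struct mmu_interval_notifier *notifier,
			unsigned long start, unsigned long npages,
			unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = start + npages * PAGE_SIZE,
		.hmm_pfns = pfns,
		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
		.dev_private_owner = vm->pgmap_owner,
	};
	int err;

again:
	range.notifier_seq = mmu_interval_read_begin(notifier);

	/* mmap_read lock only around the core MM function that requires it. */
	mmap_read_lock(mm);
	err = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (err == -EBUSY)
		goto again;	/* collided with an invalidation, retry */
	if (err)
		return err;

	/*
	 * Publish GPU pagetable entries under the notifier lock. If a
	 * notifier fired since read_begin(), the pages may be stale: drop
	 * everything and retry rather than holding locks across the fault.
	 */
	mutex_lock(&vm->notifier_lock);
	if (mmu_interval_read_retry(notifier, range.notifier_seq)) {
		mutex_unlock(&vm->notifier_lock);
		goto again;
	}
	err = my_bind_pages_to_gpu(vm, start, npages, pfns);
	mutex_unlock(&vm->notifier_lock);

	return err;
}

This is also what the "GPU pagetable locking" bullet refers to: the notifier
lock protects the pages-valid state and the seqno check for the range being
bound, not races in general; those are handled by the retry.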