[v9,6/8] drm/i915/ttm: move shrinker management into adjust_lru

Message ID 20211018091055.1998191-6-matthew.auld@intel.com (mailing list archive)
State New, archived
Series [v9,1/8] drm/i915/gem: Break out some shmem backend utils

Commit Message

Matthew Auld Oct. 18, 2021, 9:10 a.m. UTC
We currently just evict lmem objects to system memory when under memory
pressure. In that case the object might lack the usual mm.pages, which
effectively hides the pages from the i915-gem shrinker until we actually
"attach" the TT to the object, or, in the case of lmem-only objects,
until the object is migrated back to lmem when touched again.

For all cases we can just adjust the i915 shrinker LRU each time we also
adjust the TTM LRU. The two cases we care about are:

  1) When something is moved by TTM, including when initially populating
     an object. Importantly this covers the case where TTM moves something from
     lmem <-> smem, outside of the normal get_pages() interface, which
     should still ensure the shmem pages underneath are reclaimable.

  2) When calling into i915_gem_object_unlock(). The unlock should
     ensure the object is removed from the shrinker LRU, if it was indeed
     swapped out, or just purged, when the shrinker drops the object lock.
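
In code terms, the check that moves into i915_ttm_adjust_lru() boils down
to roughly the following (a condensed sketch of the hunk in the patch
below; the TTM LRU handling and locking are omitted, and the standalone
helper name is only for illustration):

	static void i915_ttm_adjust_shrink_lru(struct drm_i915_gem_object *obj)
	{
		struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
		struct i915_ttm_tt *i915_tt =
			container_of(bo->ttm, typeof(*i915_tt), ttm);
		/* Shrinkable iff the pages currently live in populated shmem. */
		bool shrinkable =
			bo->ttm && i915_tt->filp && ttm_tt_is_populated(bo->ttm);

		if (shrinkable == obj->mm.ttm_shrinkable)
			return;

		if (shrinkable) {
			if (obj->mm.madv == I915_MADV_WILLNEED)
				__i915_gem_object_make_shrinkable(obj);
			else
				__i915_gem_object_make_purgeable(obj);
		} else {
			i915_gem_object_make_unshrinkable(obj);
		}

		obj->mm.ttm_shrinkable = shrinkable;
	}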

v2(Thomas):
  - Handle managing the shrinker LRU in adjust_lru, where it is always
    safe to touch the object.
v3(Thomas):
  - Pretty much a re-write. This time piggy-back off the shrink_pin
    stuff, which actually seems to fit quite well for what we want here.
v4(Thomas):
  - Just use a simple boolean for tracking ttm_shrinkable.
v5:
  - Ensure we call adjust_lru when faulting the object, to ensure the
    pages are visible to the shrinker, if needed.
  - Add back the adjust_lru when in i915_ttm_move (Thomas)
v6(Reported-by: kernel test robot <lkp@intel.com>):
  - Remove unused i915_tt

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #v4
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  8 ++
 .../gpu/drm/i915/gem/i915_gem_object_types.h  | 14 ++-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |  5 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c  | 45 ++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       | 87 ++++++++++++++++---
 5 files changed, 137 insertions(+), 22 deletions(-)
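
For context, the shrink_pin machinery the series piggy-backs on behaves
like a hide-count: i915_gem_object_make_unshrinkable() takes a pin and
pulls the object off the shrinker lists, while the make_shrinkable() /
make_purgeable() variants drop a pin and only re-add the object once the
count reaches zero. A toy userspace model of that contract (not the
driver code; it ignores locking, the purge list and the real list
bookkeeping):

#include <stdbool.h>
#include <stdio.h>

struct toy_obj {
	int shrink_pin;		/* > 0: hidden from the shrinker */
	bool on_shrink_list;
};

static void make_unshrinkable(struct toy_obj *obj)
{
	if (obj->shrink_pin++ == 0)
		obj->on_shrink_list = false;	/* drop off the shrinker LRU */
}

static void make_shrinkable(struct toy_obj *obj)
{
	if (--obj->shrink_pin == 0)
		obj->on_shrink_list = true;	/* visible to the shrinker again */
}

int main(void)
{
	struct toy_obj obj = {};

	make_unshrinkable(&obj);	/* extra pin taken at object init */
	/* ...TTM later moves the backing store to shmem... */
	make_shrinkable(&obj);		/* adjust_lru drops the init pin */
	printf("on shrink list: %d\n", obj.on_shrink_list);	/* prints 1 */
	return 0;
}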

Comments

Thomas Hellstrom Oct. 20, 2021, 2:32 p.m. UTC | #1
On Mon, 2021-10-18 at 10:10 +0100, Matthew Auld wrote:
> [snip]

R-B also for v6. Note that this may clash with a recent patch by Jason
Gunthorpe that removes the last argument of ttm_bo_vm_fault_reserved().

/Thomas
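
For reference, the clash should be mechanical: assuming the argument
being removed is the trailing huge-page fault size (the literal 1 this
patch passes), the rebased call in vm_fault_ttm() would just become
something like:

	ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
				       TTM_BO_VM_NUM_PREFAULT);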



Patch

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index e641db297e0e..3eac8cf2ae10 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -294,6 +294,12 @@  i915_gem_object_is_shrinkable(const struct drm_i915_gem_object *obj)
 	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_IS_SHRINKABLE);
 }
 
+static inline bool
+i915_gem_object_has_self_managed_shrink_list(const struct drm_i915_gem_object *obj)
+{
+	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_SELF_MANAGED_SHRINK_LIST);
+}
+
 static inline bool
 i915_gem_object_is_proxy(const struct drm_i915_gem_object *obj)
 {
@@ -531,6 +537,8 @@  i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 
 void i915_gem_object_make_unshrinkable(struct drm_i915_gem_object *obj);
 void i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj);
+void __i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj);
+void __i915_gem_object_make_purgeable(struct drm_i915_gem_object *obj);
 void i915_gem_object_make_purgeable(struct drm_i915_gem_object *obj);
 
 static inline bool cpu_write_needs_clflush(struct drm_i915_gem_object *obj)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index f4233c4e8d2e..5718a09f5533 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -34,9 +34,11 @@  struct i915_lut_handle {
 
 struct drm_i915_gem_object_ops {
 	unsigned int flags;
-#define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(1)
-#define I915_GEM_OBJECT_IS_PROXY	BIT(2)
-#define I915_GEM_OBJECT_NO_MMAP		BIT(3)
+#define I915_GEM_OBJECT_IS_SHRINKABLE			BIT(1)
+/* Skip the shrinker management in set_pages/unset_pages */
+#define I915_GEM_OBJECT_SELF_MANAGED_SHRINK_LIST	BIT(2)
+#define I915_GEM_OBJECT_IS_PROXY			BIT(3)
+#define I915_GEM_OBJECT_NO_MMAP				BIT(4)
 
 	/* Interface between the GEM object and its backing storage.
 	 * get_pages() is called once prior to the use of the associated set
@@ -485,6 +487,12 @@  struct drm_i915_gem_object {
 		 */
 		atomic_t shrink_pin;
 
+		/**
+		 * @ttm_shrinkable: True when the object is using shmem pages
+		 * underneath. Protected by the object lock.
+		 */
+		bool ttm_shrinkable;
+
 		/**
 		 * Priority list of potential placements for this object.
 		 */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index ea6d9b3d2d6b..308e22a80af4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -68,7 +68,7 @@  void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 		shrinkable = false;
 	}
 
-	if (shrinkable) {
+	if (shrinkable && !i915_gem_object_has_self_managed_shrink_list(obj)) {
 		struct list_head *list;
 		unsigned long flags;
 
@@ -210,7 +210,8 @@  __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 	if (i915_gem_object_is_volatile(obj))
 		obj->mm.madv = I915_MADV_WILLNEED;
 
-	i915_gem_object_make_unshrinkable(obj);
+	if (!i915_gem_object_has_self_managed_shrink_list(obj))
+		i915_gem_object_make_unshrinkable(obj);
 
 	if (obj->mm.mapping) {
 		unmap_object(obj, page_mask_bits(obj->mm.mapping));
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 66121fedc655..dde0a5c232f8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -497,13 +497,12 @@  void i915_gem_object_make_unshrinkable(struct drm_i915_gem_object *obj)
 	spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
 }
 
-static void __i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj,
-					      struct list_head *head)
+static void ___i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj,
+					       struct list_head *head)
 {
 	struct drm_i915_private *i915 = obj_to_i915(obj);
 	unsigned long flags;
 
-	GEM_BUG_ON(!i915_gem_object_has_pages(obj));
 	if (!i915_gem_object_is_shrinkable(obj))
 		return;
 
@@ -523,6 +522,38 @@  static void __i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj,
 	spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
 }
 
+/**
+ * __i915_gem_object_make_shrinkable - Move the object to the tail of the
+ * shrinkable list. Objects on this list might be swapped out. Used with
+ * WILLNEED objects.
+ * @obj: The GEM object.
+ *
+ * DO NOT USE. This is intended to be called on very special objects that don't
+ * yet have mm.pages, but are guaranteed to have potentially reclaimable pages
+ * underneath.
+ */
+void __i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj)
+{
+	___i915_gem_object_make_shrinkable(obj,
+					   &obj_to_i915(obj)->mm.shrink_list);
+}
+
+/**
+ * __i915_gem_object_make_purgeable - Move the object to the tail of the
+ * purgeable list. Objects on this list might be swapped out. Used with
+ * DONTNEED objects.
+ * @obj: The GEM object.
+ *
+ * DO NOT USE. This is intended to be called on very special objects that don't
+ * yet have mm.pages, but are guaranteed to have potentially reclaimable pages
+ * underneath.
+ */
+void __i915_gem_object_make_purgeable(struct drm_i915_gem_object *obj)
+{
+	___i915_gem_object_make_shrinkable(obj,
+					   &obj_to_i915(obj)->mm.purge_list);
+}
+
 /**
  * i915_gem_object_make_shrinkable - Move the object to the tail of the
  * shrinkable list. Objects on this list might be swapped out. Used with
@@ -535,8 +566,8 @@  static void __i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj,
  */
 void i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj)
 {
-	__i915_gem_object_make_shrinkable(obj,
-					  &obj_to_i915(obj)->mm.shrink_list);
+	GEM_BUG_ON(!i915_gem_object_has_pages(obj));
+	__i915_gem_object_make_shrinkable(obj);
 }
 
 /**
@@ -552,6 +583,6 @@  void i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj)
  */
 void i915_gem_object_make_purgeable(struct drm_i915_gem_object *obj)
 {
-	__i915_gem_object_make_shrinkable(obj,
-					  &obj_to_i915(obj)->mm.purge_list);
+	GEM_BUG_ON(!i915_gem_object_has_pages(obj));
+	__i915_gem_object_make_purgeable(obj);
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 0fe1eb8f38e9..8de1031a2c6e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -762,6 +762,7 @@  static int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
 		obj->ttm.get_io_page.sg_idx = 0;
 	}
 
+	i915_ttm_adjust_lru(obj);
 	i915_ttm_adjust_gem_after_move(obj);
 	return 0;
 }
@@ -851,7 +852,6 @@  static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
 			return i915_ttm_err_to_gem(ret);
 	}
 
-	i915_ttm_adjust_lru(obj);
 	if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
 		ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
 		if (ret)
@@ -862,19 +862,15 @@  static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
 	}
 
 	if (!i915_gem_object_has_pages(obj)) {
-		struct i915_ttm_tt *i915_tt =
-			container_of(bo->ttm, typeof(*i915_tt), ttm);
-
 		/* Object either has a page vector or is an iomem object */
 		st = bo->ttm ? i915_ttm_tt_get_st(bo->ttm) : obj->ttm.cached_io_st;
 		if (IS_ERR(st))
 			return PTR_ERR(st);
 
 		__i915_gem_object_set_pages(obj, st, i915_sg_dma_sizes(st->sgl));
-		if (!bo->ttm || !i915_tt->is_shmem)
-			i915_gem_object_make_unshrinkable(obj);
 	}
 
+	i915_ttm_adjust_lru(obj);
 	return ret;
 }
 
@@ -945,8 +941,6 @@  static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
 	 * If the object is not destroyed next, The TTM eviction logic
 	 * and shrinkers will move it out if needed.
 	 */
-
-	i915_ttm_adjust_lru(obj);
 }
 
 static void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
@@ -954,6 +948,8 @@  static void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
 	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
 	struct i915_ttm_tt *i915_tt =
 		container_of(bo->ttm, typeof(*i915_tt), ttm);
+	bool shrinkable =
+		bo->ttm && i915_tt->filp && ttm_tt_is_populated(bo->ttm);
 
 	/*
 	 * Don't manipulate the TTM LRUs while in TTM bo destruction.
@@ -962,11 +958,40 @@  static void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
 	if (!kref_read(&bo->kref))
 		return;
 
+	/*
+	 * We skip managing the shrinker LRU in set_pages() and just manage
+	 * everything here. This does at least solve the issue with having
+	 * temporary shmem mappings (like with evicted lmem) not being visible to
+	 * the shrinker. Only our shmem objects are shrinkable, everything else
+	 * we keep as unshrinkable.
+	 *
+	 * To make sure everything plays nice we keep an extra shrink pin in TTM
+	 * if the underlying pages are not currently shrinkable. Once we release
+	 * our pin, like when the pages are moved to shmem, the pages will then
+	 * be added to the shrinker LRU, assuming the caller isn't also holding
+	 * a pin.
+	 *
+	 * TODO: consider maybe also bumping the shrinker list here when we have
+	 * already unpinned it, which should give us something more like an LRU.
+	 */
+	if (shrinkable != obj->mm.ttm_shrinkable) {
+		if (shrinkable) {
+			if (obj->mm.madv == I915_MADV_WILLNEED)
+				__i915_gem_object_make_shrinkable(obj);
+			else
+				__i915_gem_object_make_purgeable(obj);
+		} else {
+			i915_gem_object_make_unshrinkable(obj);
+		}
+
+		obj->mm.ttm_shrinkable = shrinkable;
+	}
+
 	/*
 	 * Put on the correct LRU list depending on the MADV status
 	 */
 	spin_lock(&bo->bdev->lru_lock);
-	if (bo->ttm && i915_tt->filp) {
+	if (shrinkable) {
 		/* Try to keep shmem_tt from being considered for shrinking. */
 		bo->priority = TTM_MAX_BO_PRIORITY - 1;
 	} else if (obj->mm.madv != I915_MADV_WILLNEED) {
@@ -1010,13 +1035,34 @@  static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
 	struct vm_area_struct *area = vmf->vma;
 	struct drm_i915_gem_object *obj =
 		i915_ttm_to_gem(area->vm_private_data);
+	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
+	struct drm_device *dev = bo->base.dev;
+	vm_fault_t ret;
+	int idx;
 
 	/* Sanity check that we allow writing into this object */
 	if (unlikely(i915_gem_object_is_readonly(obj) &&
 		     area->vm_flags & VM_WRITE))
 		return VM_FAULT_SIGBUS;
 
-	return ttm_bo_vm_fault(vmf);
+	ret = ttm_bo_vm_reserve(bo, vmf);
+	if (ret)
+		return ret;
+
+	if (drm_dev_enter(dev, &idx)) {
+		ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
+					       TTM_BO_VM_NUM_PREFAULT, 1);
+		drm_dev_exit(idx);
+	} else {
+		ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
+	}
+	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
+		return ret;
+
+	i915_ttm_adjust_lru(obj);
+
+	dma_resv_unlock(bo->base.resv);
+	return ret;
 }
 
 static int
@@ -1067,6 +1113,7 @@  static u64 i915_ttm_mmap_offset(struct drm_i915_gem_object *obj)
 
 static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
 	.name = "i915_gem_object_ttm",
+	.flags = I915_GEM_OBJECT_SELF_MANAGED_SHRINK_LIST,
 
 	.get_pages = i915_ttm_get_pages,
 	.put_pages = i915_ttm_put_pages,
@@ -1089,6 +1136,18 @@  void i915_ttm_bo_destroy(struct ttm_buffer_object *bo)
 	mutex_destroy(&obj->ttm.get_io_page.lock);
 
 	if (obj->ttm.created) {
+		/*
+		 * We freely manage the shrinker LRU outside of the mm.pages life
+		 * cycle. As a result when destroying the object we should be
+		 * extra paranoid and ensure we remove it from the LRU, before
+		 * we free the object.
+		 *
+		 * Touching the ttm_shrinkable outside of the object lock here
+		 * should be safe now that the last GEM object ref was dropped.
+		 */
+		if (obj->mm.ttm_shrinkable)
+			i915_gem_object_make_unshrinkable(obj);
+
 		i915_ttm_backup_free(obj);
 
 		/* This releases all gem object bindings to the backend. */
@@ -1141,6 +1200,14 @@  int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	/* Forcing the page size is kernel internal only */
 	GEM_BUG_ON(page_size && obj->mm.n_placements);
 
+	/*
+	 * Keep an extra shrink pin to prevent the object from being made
+	 * shrinkable too early. If the ttm_tt is ever allocated in shmem, we
+	 * drop the pin. The TTM backend manages the shrinker LRU itself,
+	 * outside of the normal mm.pages life cycle.
+	 */
+	i915_gem_object_make_unshrinkable(obj);
+
 	/*
 	 * If this function fails, it will call the destructor, but
 	 * our caller still owns the object. So no freeing in the