Message ID | 20230607081032.19275-1-nirmoy.das@intel.com (mailing list archive)
---|---
State | New, archived
Series | [v3] drm/i915: Fix a VMA UAF for multi-gt platform
Thank you, Nirmoy. We have verified it on Chrome.

Tested-by: Sushma Venkatesh Reddy <sushma.venkatesh.reddy@intel.com>

-----Original Message-----
From: Das, Nirmoy <nirmoy.das@intel.com>
Sent: Wednesday, June 7, 2023 1:11 AM
To: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org; Das, Nirmoy <nirmoy.das@intel.com>; Joonas Lahtinen <joonas.lahtinen@linux.intel.com>; Vivi, Rodrigo <rodrigo.vivi@intel.com>; Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>; Thomas Hellström <thomas.hellstrom@linux.intel.com>; Wilson, Chris P <chris.p.wilson@intel.com>; Andi Shyti <andi.shyti@linux.intel.com>; Hajda, Andrzej <andrzej.hajda@intel.com>; Venkatesh Reddy, Sushma <sushma.venkatesh.reddy@intel.com>
Subject: [PATCH v3] drm/i915: Fix a VMA UAF for multi-gt platform

Ensure correct handling of closed VMAs on multi-GT platforms to prevent
use-after-free.

Currently, closed VMAs are added exclusively to GT0's closed_vma list
(gt->closed_vma) and, when GT0 goes idle, are freed by i915_vma_parked(),
which assumes the entire GPU is idle. However, on platforms with multiple
GTs, such as MTL, GT1 may remain active while GT0 is idle. GT0 then
mistakenly considers the closed VMAs on its closed_vma list unnecessary,
which can lead to a use-after-free if a job for GT1 attempts to access a
freed VMA. We do take a wakeref for GT0, but only later, after the VMAs
have been evaluated. To mitigate this, hold a GT0 wakeref early.

v2: Use gt id to detect multi-tile (Andi)
    Fix the incorrect error path.
v3: Add more comment (Andi)
    Use the new gt var when possible (Andrzej)

Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Chris Wilson <chris.p.wilson@intel.com>
Cc: Andi Shyti <andi.shyti@linux.intel.com>
Cc: Andrzej Hajda <andrzej.hajda@intel.com>
Cc: Sushma Venkatesh Reddy <sushma.venkatesh.reddy@intel.com>
Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
Tested-by: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 21 +++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 5fb459ea4294..1de9de1e4782 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2692,6 +2692,7 @@ static int
 eb_select_engine(struct i915_execbuffer *eb)
 {
     struct intel_context *ce, *child;
+    struct intel_gt *gt;
     unsigned int idx;
     int err;

@@ -2715,10 +2716,17 @@ eb_select_engine(struct i915_execbuffer *eb)
         }
     }
     eb->num_batches = ce->parallel.number_children + 1;
+    gt = ce->engine->gt;

     for_each_child(ce, child)
         intel_context_get(child);
-    intel_gt_pm_get(ce->engine->gt);
+    intel_gt_pm_get(gt);
+    /*
+     * Keep GT0 active on MTL so that i915_vma_parked() doesn't
+     * free VMAs while execbuf ioctl is validating VMAs.
+     */
+    if (gt->info.id)
+        intel_gt_pm_get(to_gt(gt->i915));

     if (!test_bit(CONTEXT_ALLOC_BIT, &ce->flags)) {
         err = intel_context_alloc_state(ce);
@@ -2757,7 +2765,10 @@ eb_select_engine(struct i915_execbuffer *eb)
     return err;

 err:
-    intel_gt_pm_put(ce->engine->gt);
+    if (gt->info.id)
+        intel_gt_pm_put(to_gt(gt->i915));
+
+    intel_gt_pm_put(gt);
     for_each_child(ce, child)
         intel_context_put(child);
     intel_context_put(ce);
@@ -2770,6 +2781,12 @@ eb_put_engine(struct i915_execbuffer *eb)
     struct intel_context *child;

     i915_vm_put(eb->context->vm);
+    /*
+     * This works in conjunction with eb_select_engine() to prevent
+     * i915_vma_parked() from interfering while execbuf validates vmas.
+     */
+    if (eb->gt->info.id)
+        intel_gt_pm_put(to_gt(eb->gt->i915));
     intel_gt_pm_put(eb->gt);
     for_each_child(eb->context, child)
         intel_context_put(child);
--
2.39.0
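For readers skimming the diff, the core of the fix is a balanced pair of wakerefs: the engine's own GT is always pinned, and when that engine lives on a remote GT (gt->info.id != 0) the root GT is pinned as well, so i915_vma_parked() cannot free closed VMAs while execbuf is still validating them. The sketch below is not part of the patch; eb_gt_pm_get()/eb_gt_pm_put() are hypothetical wrappers that only illustrate the pairing, built on the existing intel_gt_pm_get()/intel_gt_pm_put() and to_gt() helpers the patch itself uses, with include paths as seen elsewhere under drivers/gpu/drm/i915/gem/.

#include "i915_drv.h"        /* to_gt(), struct drm_i915_private */
#include "gt/intel_gt_pm.h"  /* intel_gt_pm_get()/intel_gt_pm_put() */

/* Hypothetical helper, for illustration only; the patch inlines this logic. */
static void eb_gt_pm_get(struct intel_gt *gt)
{
	intel_gt_pm_get(gt);
	/*
	 * Remote GT: also hold a wakeref on GT0, which owns the
	 * closed_vma list, so i915_vma_parked() stays out of the way.
	 */
	if (gt->info.id)
		intel_gt_pm_get(to_gt(gt->i915));
}

/* Hypothetical counterpart; release in the reverse order of the get. */
static void eb_gt_pm_put(struct intel_gt *gt)
{
	if (gt->info.id)
		intel_gt_pm_put(to_gt(gt->i915));
	intel_gt_pm_put(gt);
}

Releasing in reverse order mirrors what the patch does in the err: path of eb_select_engine() and in eb_put_engine(), so every intel_gt_pm_get() is matched by exactly one intel_gt_pm_put().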