Message ID | 20240417011021.600889-5-mcanal@igalia.com (mailing list archive)
---|---
State | New, archived
Series | drm/v3d: Fix GPU stats inconsistencies and race-condition
On 17/04/2024 01:53, Maíra Canal wrote:
> In V3D, the conclusion of a job is indicated by an IRQ. When a job
> finishes, we update the local and the global GPU stats of that
> queue. But, while the GPU stats are being updated, a user might be
> reading the stats from sysfs or fdinfo.
>
> For example, in `gpu_stats_show()`, consider a scenario where
> `v3d->queue[queue].start_ns != 0`, then an interrupt happens and updates
> the value of `v3d->queue[queue].start_ns` to 0; we come back to
> `gpu_stats_show()` to calculate `active_runtime` and now
> `active_runtime = timestamp`.
>
> In this simple example, the user would see a spike in the queue usage
> that didn't match reality.
>
> In order to address this issue properly, use a seqcount to protect the
> read and write sections of the code.
>
> Fixes: 09a93cc4f7d1 ("drm/v3d: Implement show_fdinfo() callback for GPU usage stats")
> Reported-by: Tvrtko Ursulin <tursulin@igalia.com>
> Signed-off-by: Maíra Canal <mcanal@igalia.com>
> ---
>  drivers/gpu/drm/v3d/v3d_drv.c   | 10 ++++++----
>  drivers/gpu/drm/v3d/v3d_drv.h   | 21 +++++++++++++++++++++
>  drivers/gpu/drm/v3d/v3d_gem.c   |  7 +++++--
>  drivers/gpu/drm/v3d/v3d_sched.c |  7 +++++++
>  drivers/gpu/drm/v3d/v3d_sysfs.c | 11 +++--------
>  5 files changed, 42 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
> index 52e3ba9df46f..cf15fa142968 100644
> --- a/drivers/gpu/drm/v3d/v3d_drv.c
> +++ b/drivers/gpu/drm/v3d/v3d_drv.c
> @@ -121,6 +121,7 @@ v3d_open(struct drm_device *dev, struct drm_file *file)
>  					  1, NULL);
>
>  		memset(&v3d_priv->stats[i], 0, sizeof(v3d_priv->stats[i]));
> +		seqcount_init(&v3d_priv->stats[i].lock);
>  	}
>
>  	v3d_perfmon_open_file(v3d_priv);
> @@ -150,20 +151,21 @@ static void v3d_show_fdinfo(struct drm_printer *p, struct drm_file *file)
>
>  	for (queue = 0; queue < V3D_MAX_QUEUES; queue++) {
>  		struct v3d_stats *stats = &file_priv->stats[queue];
> +		u64 active_runtime, jobs_completed;
> +
> +		v3d_get_stats(stats, timestamp, &active_runtime, &jobs_completed);
>
>  		/* Note that, in case of a GPU reset, the time spent during an
>  		 * attempt of executing the job is not computed in the runtime.
>  		 */
>  		drm_printf(p, "drm-engine-%s: \t%llu ns\n",
> -			   v3d_queue_to_string(queue),
> -			   stats->start_ns ? stats->enabled_ns + timestamp - stats->start_ns
> -					   : stats->enabled_ns);
> +			   v3d_queue_to_string(queue), active_runtime);
>
>  		/* Note that we only count jobs that completed. Therefore, jobs
>  		 * that were resubmitted due to a GPU reset are not computed.
>  		 */
>  		drm_printf(p, "v3d-jobs-%s: \t%llu jobs\n",
> -			   v3d_queue_to_string(queue), stats->jobs_completed);
> +			   v3d_queue_to_string(queue), jobs_completed);
>  	}
>  }
>
> diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
> index 5a198924d568..5211df7c7317 100644
> --- a/drivers/gpu/drm/v3d/v3d_drv.h
> +++ b/drivers/gpu/drm/v3d/v3d_drv.h
> @@ -40,8 +40,29 @@ struct v3d_stats {
>  	u64 start_ns;
>  	u64 enabled_ns;
>  	u64 jobs_completed;
> +
> +	/*
> +	 * This seqcount is used to protect the access to the GPU stats
> +	 * variables. It must be used as, while we are reading the stats,
> +	 * IRQs can happen and the stats can be updated.
> +	 */
> +	seqcount_t lock;
>  };
>
> +static inline void v3d_get_stats(const struct v3d_stats *stats, u64 timestamp,
> +				 u64 *active_runtime, u64 *jobs_completed)
> +{
> +	unsigned int seq;
> +
> +	do {
> +		seq = read_seqcount_begin(&stats->lock);
> +		*active_runtime = stats->enabled_ns;
> +		if (stats->start_ns)
> +			*active_runtime += timestamp - stats->start_ns;
> +		*jobs_completed = stats->jobs_completed;
> +	} while (read_seqcount_retry(&stats->lock, seq));
> +}

Patch reads clean and obviously correct to me.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>

The only possible discussion point I see is whether v3d_get_stats could have been introduced first, to avoid mixing pure refactors with functionality, and whether it deserves to be in a header or could be a function call in v3d_drv.c just as well. No strong opinion from me; since it is your driver, your preference.

Regards,

Tvrtko

> +
>  struct v3d_queue_state {
>  	struct drm_gpu_scheduler sched;
>
> diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
> index d14589d3ae6c..da8faf3b9011 100644
> --- a/drivers/gpu/drm/v3d/v3d_gem.c
> +++ b/drivers/gpu/drm/v3d/v3d_gem.c
> @@ -247,8 +247,11 @@ v3d_gem_init(struct drm_device *dev)
>  	int ret, i;
>
>  	for (i = 0; i < V3D_MAX_QUEUES; i++) {
> -		v3d->queue[i].fence_context = dma_fence_context_alloc(1);
> -		memset(&v3d->queue[i].stats, 0, sizeof(v3d->queue[i].stats));
> +		struct v3d_queue_state *queue = &v3d->queue[i];
> +
> +		queue->fence_context = dma_fence_context_alloc(1);
> +		memset(&queue->stats, 0, sizeof(queue->stats));
> +		seqcount_init(&queue->stats.lock);
>  	}
>
>  	spin_lock_init(&v3d->mm_lock);
> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
> index b9614944931c..7cd8c335cd9b 100644
> --- a/drivers/gpu/drm/v3d/v3d_sched.c
> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> @@ -114,16 +114,23 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
>  	struct v3d_stats *local_stats = &file->stats[queue];
>  	u64 now = local_clock();
>
> +	write_seqcount_begin(&local_stats->lock);
>  	local_stats->start_ns = now;
> +	write_seqcount_end(&local_stats->lock);
> +
> +	write_seqcount_begin(&global_stats->lock);
>  	global_stats->start_ns = now;
> +	write_seqcount_end(&global_stats->lock);
>  }
>
>  static void
>  v3d_stats_update(struct v3d_stats *stats, u64 now)
>  {
> +	write_seqcount_begin(&stats->lock);
>  	stats->enabled_ns += now - stats->start_ns;
>  	stats->jobs_completed++;
>  	stats->start_ns = 0;
> +	write_seqcount_end(&stats->lock);
>  }
>
>  void
> diff --git a/drivers/gpu/drm/v3d/v3d_sysfs.c b/drivers/gpu/drm/v3d/v3d_sysfs.c
> index 6a8e7acc8b82..d610e355964f 100644
> --- a/drivers/gpu/drm/v3d/v3d_sysfs.c
> +++ b/drivers/gpu/drm/v3d/v3d_sysfs.c
> @@ -15,18 +15,15 @@ gpu_stats_show(struct device *dev, struct device_attribute *attr, char *buf)
>  	struct v3d_dev *v3d = to_v3d_dev(drm);
>  	enum v3d_queue queue;
>  	u64 timestamp = local_clock();
> -	u64 active_runtime;
>  	ssize_t len = 0;
>
>  	len += sysfs_emit(buf, "queue\ttimestamp\tjobs\truntime\n");
>
>  	for (queue = 0; queue < V3D_MAX_QUEUES; queue++) {
>  		struct v3d_stats *stats = &v3d->queue[queue].stats;
> +		u64 active_runtime, jobs_completed;
>
> -		if (stats->start_ns)
> -			active_runtime = timestamp - stats->start_ns;
> -		else
> -			active_runtime = 0;
> +		v3d_get_stats(stats, timestamp, &active_runtime, &jobs_completed);
>
>  		/* Each line will display the queue name, timestamp, the number
>  		 * of jobs sent to that queue and the runtime, as can be seem here:
> @@ -40,9 +37,7 @@ gpu_stats_show(struct device *dev, struct device_attribute *attr, char *buf)
>  		 */
>  		len += sysfs_emit_at(buf, len, "%s\t%llu\t%llu\t%llu\n",
>  				     v3d_queue_to_string(queue),
> -				     timestamp,
> -				     stats->jobs_completed,
> -				     stats->enabled_ns + active_runtime);
> +				     timestamp, jobs_completed, active_runtime);
>  	}
>
>  	return len;