
[7/8] drm/i915: Disable tracing points on PREEMPT_RT

Message ID 20211214140301.520464-8-bigeasy@linutronix.de (mailing list archive)
State New, archived
Series drm/i915: PREEMPT_RT related fixups.

Commit Message

Sebastian Andrzej Siewior Dec. 14, 2021, 2:03 p.m. UTC
Luca Abeni reported this:
| BUG: scheduling while atomic: kworker/u8:2/15203/0x00000003
| CPU: 1 PID: 15203 Comm: kworker/u8:2 Not tainted 4.19.1-rt3 #10
| Call Trace:
|  rt_spin_lock+0x3f/0x50
|  gen6_read32+0x45/0x1d0 [i915]
|  g4x_get_vblank_counter+0x36/0x40 [i915]
|  trace_event_raw_event_i915_pipe_update_start+0x7d/0xf0 [i915]

The tracing events, trace_i915_pipe_update_start() among others, use
functions which acquire spinlock_t locks, and these locks are transformed
into sleeping locks on PREEMPT_RT. A few trace points use
intel_get_crtc_scanline(), others use ->get_vblank_counter(), which also
might acquire sleeping locks on PREEMPT_RT.
At the time the arguments are evaluated within the trace point, preemption
is disabled, so the locks must not be acquired on PREEMPT_RT.
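
As a rough sketch of the failing path (annotated from the splat above; the
uncore->lock detail is an assumption based on gen6_read32() being an MMIO
accessor, the rest follows the backtrace):

/*
 * trace_i915_pipe_update_start(crtc)
 *   -> tracepoint core evaluates TP_fast_assign() with preemption disabled
 *     -> ->get_vblank_counter() == g4x_get_vblank_counter()
 *       -> gen6_read32()                        MMIO read helper
 *         -> spin_lock_irqsave(&uncore->lock)   spinlock_t
 *           -> rt_spin_lock()                   sleeping lock on PREEMPT_RT
 *              => "scheduling while atomic"
 */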

Based on this I don't see any other way than to disable the trace points
on PREEMPT_RT.

Reported-by: Luca Abeni <lucabe72@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/gpu/drm/i915/i915_trace.h | 4 ++++
 1 file changed, 4 insertions(+)

Comments

Steven Rostedt Dec. 14, 2021, 2:36 p.m. UTC | #1
On Tue, 14 Dec 2021 15:03:00 +0100
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> Luca Abeni reported this:
> | BUG: scheduling while atomic: kworker/u8:2/15203/0x00000003
> | CPU: 1 PID: 15203 Comm: kworker/u8:2 Not tainted 4.19.1-rt3 #10
> | Call Trace:
> |  rt_spin_lock+0x3f/0x50
> |  gen6_read32+0x45/0x1d0 [i915]
> |  g4x_get_vblank_counter+0x36/0x40 [i915]
> |  trace_event_raw_event_i915_pipe_update_start+0x7d/0xf0 [i915]
> 
> The tracing events, trace_i915_pipe_update_start() among others, use
> functions which acquire spinlock_t locks, and these locks are transformed
> into sleeping locks on PREEMPT_RT. A few trace points use
> intel_get_crtc_scanline(), others use ->get_vblank_counter(), which also
> might acquire sleeping locks on PREEMPT_RT.
> At the time the arguments are evaluated within the trace point, preemption
> is disabled, so the locks must not be acquired on PREEMPT_RT.
> 
> Based on this I don't see any other way than to disable the trace points
> on PREEMPT_RT.

Another way around this that I can see is, if the data for the tracepoints
can fit on the stack, to add wrappers around the tracepoints. For example,
looking at the first tracepoint in i915_trace.h:

TRACE_EVENT(intel_pipe_enable,
	    TP_PROTO(struct intel_crtc *crtc),
	    TP_ARGS(crtc),

	    TP_STRUCT__entry(
			     __array(u32, frame, 3)
			     __array(u32, scanline, 3)
			     __field(enum pipe, pipe)
			     ),
	    TP_fast_assign(
			   struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
			   struct intel_crtc *it__;
			   for_each_intel_crtc(&dev_priv->drm, it__) {
				   __entry->frame[it__->pipe] = intel_crtc_get_vblank_counter(it__);
				   __entry->scanline[it__->pipe] = intel_get_crtc_scanline(it__);
			   }
			   __entry->pipe = crtc->pipe;
			   ),

	    TP_printk("pipe %c enable, pipe A: frame=%u, scanline=%u, pipe B: frame=%u, scanline=%u, pipe C: frame=%u, scanline=%u",
		      pipe_name(__entry->pipe),
		      __entry->frame[PIPE_A], __entry->scanline[PIPE_A],
		      __entry->frame[PIPE_B], __entry->scanline[PIPE_B],
		      __entry->frame[PIPE_C], __entry->scanline[PIPE_C])
);

We could modify this to be:

TRACE_EVENT(intel_pipe_enable,
	    TP_PROTO(u32 *frame, u32 *scanline, enum pipe pipe),
	    TP_ARGS(frame, scanline, pipe),

	    TP_STRUCT__entry(
			     __array(u32, frame, 3)
			     __array(u32, scanline, 3)
			     __field(enum pipe, pipe)
			     ),
	    TP_fast_assign(
			   int i;
			   for (i = 0; i < 3; i++) {
			      __entry->frame[i] = frame[i];
			      __entry->scanline[i] = scanline[i];
			   }
			   __entry->pipe = pipe;
			   ),

	    TP_printk("pipe %c enable, pipe A: frame=%u, scanline=%u, pipe B: frame=%u, scanline=%u, pipe C: frame=%u, scanline=%u",
		      pipe_name(__entry->pipe),
		      __entry->frame[PIPE_A], __entry->scanline[PIPE_A],
		      __entry->frame[PIPE_B], __entry->scanline[PIPE_B],
		      __entry->frame[PIPE_C], __entry->scanline[PIPE_C])
);


static inline void do_trace_intel_pipe(struct intel_crtc *crtc)
{
	u32 frame[3];
	u32 scanline[3];
	enum pipe pipe;

	if (!trace_intel_pipe_enable_enabled())
		return;

	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
	struct intel_crtc *it__;
	for_each_intel_crtc(&dev_priv->drm, it__) {
		frame[it__->pipe] = intel_crtc_get_vblank_counter(it__);
		scanline[it__->pipe] = intel_get_crtc_scanline(it__);
	}

	trace_intel_pipe_enable(frame, scanline, crtc->pipe);
}


The trace_intel_pipe_enable_enabled() is a static_branch that will act the
same as the nop of a trace event, so this will still not add overhead when
not enabled.
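
For reference, the trace_intel_pipe_enable_enabled() helper is generated by
the tracepoint infrastructure; simplified from include/linux/tracepoint.h it
is roughly:

static inline bool trace_intel_pipe_enable_enabled(void)
{
	return static_key_false(&__tracepoint_intel_pipe_enable.key);
}

i.e. the disabled case is a patched-out jump, the same mechanism that makes
the tracepoint call site itself essentially free when tracing is off.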

All the processing will be done outside the trace event allowing it to be
preempted, and then when the trace event is executed, it will run quickly
without taking any locks.

Then have the code call do_trace_intel_pipe() instead of trace_intel_pipe_enable()
and this should fix the issue with PREEMPT_RT.
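
As a call-site sketch (the exact location of the existing
trace_intel_pipe_enable() call, e.g. intel_crtc_vblank_on(), is an
assumption here):

-	trace_intel_pipe_enable(crtc);
+	do_trace_intel_pipe(crtc);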

-- Steve
Jani Nikula Dec. 14, 2021, 3:41 p.m. UTC | #2
On Tue, 14 Dec 2021, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Tue, 14 Dec 2021 15:03:00 +0100
> Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
>
>> Luca Abeni reported this:
>> | BUG: scheduling while atomic: kworker/u8:2/15203/0x00000003
>> | CPU: 1 PID: 15203 Comm: kworker/u8:2 Not tainted 4.19.1-rt3 #10
>> | Call Trace:
>> |  rt_spin_lock+0x3f/0x50
>> |  gen6_read32+0x45/0x1d0 [i915]
>> |  g4x_get_vblank_counter+0x36/0x40 [i915]
>> |  trace_event_raw_event_i915_pipe_update_start+0x7d/0xf0 [i915]
>> 
>> The tracing events, trace_i915_pipe_update_start() among others, use
>> functions which acquire spinlock_t locks, and these locks are transformed
>> into sleeping locks on PREEMPT_RT. A few trace points use
>> intel_get_crtc_scanline(), others use ->get_vblank_counter(), which also
>> might acquire sleeping locks on PREEMPT_RT.
>> At the time the arguments are evaluated within the trace point, preemption
>> is disabled, so the locks must not be acquired on PREEMPT_RT.
>> 
>> Based on this I don't see any other way than to disable the trace points
>> on PREEMPT_RT.
>
> Another way around this that I can see is, if the data for the tracepoints
> can fit on the stack, to add wrappers around the tracepoints. For example,
> looking at the first tracepoint in i915_trace.h:

FYI, the display portions of the file have been split out to
display/intel_display_trace.[ch] in current drm-intel-next, headed for the
v5.17 merge window.

BR,
Jani.


>
> TRACE_EVENT(intel_pipe_enable,
> 	    TP_PROTO(struct intel_crtc *crtc),
> 	    TP_ARGS(crtc),
>
> 	    TP_STRUCT__entry(
> 			     __array(u32, frame, 3)
> 			     __array(u32, scanline, 3)
> 			     __field(enum pipe, pipe)
> 			     ),
> 	    TP_fast_assign(
> 			   struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
> 			   struct intel_crtc *it__;
> 			   for_each_intel_crtc(&dev_priv->drm, it__) {
> 				   __entry->frame[it__->pipe] = intel_crtc_get_vblank_counter(it__);
> 				   __entry->scanline[it__->pipe] = intel_get_crtc_scanline(it__);
> 			   }
> 			   __entry->pipe = crtc->pipe;
> 			   ),
>
> 	    TP_printk("pipe %c enable, pipe A: frame=%u, scanline=%u, pipe B: frame=%u, scanline=%u, pipe C: frame=%u, scanline=%u",
> 		      pipe_name(__entry->pipe),
> 		      __entry->frame[PIPE_A], __entry->scanline[PIPE_A],
> 		      __entry->frame[PIPE_B], __entry->scanline[PIPE_B],
> 		      __entry->frame[PIPE_C], __entry->scanline[PIPE_C])
> );
>
> We could modify this to be:
>
> TRACE_EVENT(intel_pipe_enable,
> 	    TP_PROTO(u32 *frame, u32 *scanline, enum pipe pipe),
> 	    TP_ARGS(frame, scanline, pipe),
>
> 	    TP_STRUCT__entry(
> 			     __array(u32, frame, 3)
> 			     __array(u32, scanline, 3)
> 			     __field(enum pipe, pipe)
> 			     ),
> 	    TP_fast_assign(
> 			   int i;
> 			   for (i = 0; i < 3; i++) {
> 			      __entry->frame[i] = frame[i];
> 			      __entry->scanline[i] = scanline[i];
> 			   }
> 			   __entry->pipe = pipe;
> 			   ),
>
> 	    TP_printk("pipe %c enable, pipe A: frame=%u, scanline=%u, pipe B: frame=%u, scanline=%u, pipe C: frame=%u, scanline=%u",
> 		      pipe_name(__entry->pipe),
> 		      __entry->frame[PIPE_A], __entry->scanline[PIPE_A],
> 		      __entry->frame[PIPE_B], __entry->scanline[PIPE_B],
> 		      __entry->frame[PIPE_C], __entry->scanline[PIPE_C])
> );
>
>
> static inline void do_trace_intel_pipe(struct intel_crtc *crtc)
> {
> 	u32 frame[3];
> 	u32 scanline[3];
> 	enum pipe pipe;
>
> 	if (!trace_intel_pipe_enable_enabled())
> 		return;
>
> 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
> 	struct intel_crtc *it__;
> 	for_each_intel_crtc(&dev_priv->drm, it__) {
> 		frame[it__->pipe] = intel_crtc_get_vblank_counter(it__);
> 		scanline[it__->pipe] = intel_get_crtc_scanline(it__);
> 	}
>
> 	trace_intel_pipe_enable(frame, scanline, crtc->pipe);
> }
>
>
> The trace_intel_pipe_enable_enabled() is a static_branch that will act the
> same as the nop of a trace event, so this will still not add overhead when
> not enabled.
>
> All the processing will be done outside the trace event allowing it to be
> preempted, and then when the trace event is executed, it will run quickly
> without taking any locks.
>
> Then have the code call do_trace_intel_pipe() instead of trace_intel_pipe_enable()
> and this should fix the issue with PREEMPT_RT.
>
> -- Steve
Sebastian Andrzej Siewior Dec. 14, 2021, 3:56 p.m. UTC | #3
On 2021-12-14 09:36:52 [-0500], Steven Rostedt wrote:
> Another way around this that I can see is, if the data for the tracepoints
> can fit on the stack, to add wrappers around the tracepoints. For example,
> looking at the first tracepoint in i915_trace.h:
…

Nice.

> We could modify this to be:
…
> static inline void do_trace_intel_pipe(struct intel_crtc *crtc)
> {
> 	u32 frame[3];
> 	u32 scanline[3];
> 	enum pipe pipe;
> 
> 	if (!trace_intel_pipe_enable_enabled())
> 		return;
> 
> 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
> 	struct intel_crtc *it__;
> 	for_each_intel_crtc(&dev_priv->drm, it__) {
> 		frame[it__->pipe] = intel_crtc_get_vblank_counter(it__);
> 		scanline[it__->pipe] = intel_get_crtc_scanline(it__);
> 	}
> 
> 	trace_intel_pipe_enable(frame, scanline, crtc->pipe);
> }
> Then have the code call do_trace_intel_pipe() instead of trace_intel_pipe_enable()
> and this should fix the issue with PREEMPT_RT.

Is this something that the i915 devs would accept?

> -- Steve

Sebastian
Ville Syrjälä Dec. 14, 2021, 4:34 p.m. UTC | #4
On Tue, Dec 14, 2021 at 09:36:52AM -0500, Steven Rostedt wrote:
> On Tue, 14 Dec 2021 15:03:00 +0100
> Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> 
> > Luca Abeni reported this:
> > | BUG: scheduling while atomic: kworker/u8:2/15203/0x00000003
> > | CPU: 1 PID: 15203 Comm: kworker/u8:2 Not tainted 4.19.1-rt3 #10
> > | Call Trace:
> > |  rt_spin_lock+0x3f/0x50
> > |  gen6_read32+0x45/0x1d0 [i915]
> > |  g4x_get_vblank_counter+0x36/0x40 [i915]
> > |  trace_event_raw_event_i915_pipe_update_start+0x7d/0xf0 [i915]
> > 
> > The tracing events, trace_i915_pipe_update_start() among others, use
> > functions which acquire spinlock_t locks, and these locks are transformed
> > into sleeping locks on PREEMPT_RT. A few trace points use
> > intel_get_crtc_scanline(), others use ->get_vblank_counter(), which also
> > might acquire sleeping locks on PREEMPT_RT.
> > At the time the arguments are evaluated within the trace point, preemption
> > is disabled, so the locks must not be acquired on PREEMPT_RT.
> > 
> > Based on this I don't see any other way than to disable the trace points
> > on PREEMPT_RT.
> 
> Another way around this that I can see is, if the data for the tracepoints
> can fit on the stack, to add wrappers around the tracepoints. For example,
> looking at the first tracepoint in i915_trace.h:
> 
> TRACE_EVENT(intel_pipe_enable,
> 	    TP_PROTO(struct intel_crtc *crtc),
> 	    TP_ARGS(crtc),
> 
> 	    TP_STRUCT__entry(
> 			     __array(u32, frame, 3)
> 			     __array(u32, scanline, 3)
> 			     __field(enum pipe, pipe)
> 			     ),
> 	    TP_fast_assign(
> 			   struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
> 			   struct intel_crtc *it__;
> 			   for_each_intel_crtc(&dev_priv->drm, it__) {
> 				   __entry->frame[it__->pipe] = intel_crtc_get_vblank_counter(it__);
> 				   __entry->scanline[it__->pipe] = intel_get_crtc_scanline(it__);
> 			   }
> 			   __entry->pipe = crtc->pipe;
> 			   ),
> 
> 	    TP_printk("pipe %c enable, pipe A: frame=%u, scanline=%u, pipe B: frame=%u, scanline=%u, pipe C: frame=%u, scanline=%u",
> 		      pipe_name(__entry->pipe),
> 		      __entry->frame[PIPE_A], __entry->scanline[PIPE_A],
> 		      __entry->frame[PIPE_B], __entry->scanline[PIPE_B],
> 		      __entry->frame[PIPE_C], __entry->scanline[PIPE_C])
> );
> 
> We could modify this to be:
> 
> TRACE_EVENT(intel_pipe_enable,
> 	    TP_PROTO(u32 *frame, u32 *scanline, enum pipe pipe),
> 	    TP_ARGS(frame, scanline, pipe),
> 
> 	    TP_STRUCT__entry(
> 			     __array(u32, frame, 3)
> 			     __array(u32, scanline, 3)
> 			     __field(enum pipe, pipe)
> 			     ),
> 	    TP_fast_assign(
> 			   int i;
> 			   for (i = 0; i < 3; i++) {
> 			      __entry->frame[i] = frame[i];
> 			      __entry->scanline[i] = scanline[i];
> 			   }
> 			   __entry->pipe = pipe;
> 			   ),
> 
> 	    TP_printk("pipe %c enable, pipe A: frame=%u, scanline=%u, pipe B: frame=%u, scanline=%u, pipe C: frame=%u, scanline=%u",
> 		      pipe_name(__entry->pipe),
> 		      __entry->frame[PIPE_A], __entry->scanline[PIPE_A],
> 		      __entry->frame[PIPE_B], __entry->scanline[PIPE_B],
> 		      __entry->frame[PIPE_C], __entry->scanline[PIPE_C])
> );
> 
> 
> static inline void do_trace_intel_pipe(struct intel_crtc *crtc)
> {
> 	u32 frame[3];
> 	u32 scanline[3];
> 	enum pipe pipe;
> 
> 	if (!trace_intel_pipe_enable_enabled())
> 		return;
> 
> 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
> 	struct intel_crtc *it__;
> 	for_each_intel_crtc(&dev_priv->drm, it__) {
> 		frame[it__->pipe] = intel_crtc_get_vblank_counter(it__);
> 		scanline[it__->pipe] = intel_get_crtc_scanline(it__);
> 	}
> 
> 	trace_intel_pipe_enable(frame, scanline, crtc->pipe);
> }

Looks slightly tedious. Can't we have "slow" (or whatever) versions of
the trace macros so we could just declare these the same way as before
without having to manually write that wrapper for every event?
Steven Rostedt Dec. 14, 2021, 4:58 p.m. UTC | #5
On Tue, 14 Dec 2021 18:34:50 +0200
Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:

> Looks slightly tedious. Can't we have "slow" (or whatever) versions of
> the trace macros so we could just declare these the same way as before
> without having to manually write that wrapper for every event?

That would be quite tedious as well ;-)

There are a couple of problems with doing it as a macro. One is that the data
would need to be saved on the stack. There's no guarantee that there will be
enough stack available. We could probably add a way to limit the size. That
is, adding something like:

#define MAX_SLOW_TRACE_ENTRY_SIZE	256

	BUILD_BUG_ON(sizeof(struct trace_event_raw_##call) > MAX_SLOW_TRACE_ENTRY_SIZE);

and then have the entry done outside the trace event. But even with that,
this is specific to the perf and ftrace code, and not to the trace point
that is called. We would need to figure out a way to make the macro work
for all the users.
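
Just to sketch the shape of it (purely hypothetical; no such macro exists
and the names below are made up), it would be something along the lines of:

/*
 * Hypothetical only: DEFINE_TRACE_WRAPPER and the "gather" hook are
 * invented names.  Commas inside proto/args/gather would need
 * TP_PROTO()/TP_ARGS()-style parenthesized wrappers, and this says
 * nothing about limiting stack usage or the lockdep point below.
 */
#define DEFINE_TRACE_WRAPPER(event, proto, args, gather)	\
static inline void do_trace_##event(proto)			\
{								\
	if (!trace_##event##_enabled())				\
		return;						\
	gather;							\
	trace_##event(args);					\
}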

It may be possible to do, but it will be far from trivial, and I'm not sure
I want this to be an easy option. Locks should not be taken from trace
events in general, as they are not tested with lockdep when the trace
points are not enabled, and could hide deadlocks that may not appear until
running in production.

-- Steve
Sebastian Andrzej Siewior Feb. 8, 2022, 5:32 p.m. UTC | #6
On 2021-12-14 11:58:37 [-0500], Steven Rostedt wrote:
> On Tue, 14 Dec 2021 18:34:50 +0200
> Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> 
> > Looks slightly tedious. Can't we have "slow" (or whatever) versions of
> > the trace macros so we could just declare these the same way as before
> > without having to manually write that wrapper for every event?
> 
> That would be quite tedious as well ;-)
…
> It may be possible to do, but it will be far from trivial, and I'm not sure
> I want this to be an easy option. Locks should not be taken from trace
> events in general, as they are not tested with lockdep when the trace
> points are not enabled, and could hide deadlocks that may not appear until
> running in production.

So do we disable the tracing points, or try what Steven suggested?

> -- Steve

Sebastian

Patch

diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 8104981a66044..64c3fa7cc05df 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -2,6 +2,10 @@ 
 #if !defined(_I915_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
 #define _I915_TRACE_H_
 
+#ifdef CONFIG_PREEMPT_RT
+#define NOTRACE
+#endif
+
 #include <linux/stringify.h>
 #include <linux/types.h>
 #include <linux/tracepoint.h>