Message ID | 20181204191229.6851-1-anders.roxell@linaro.org (mailing list archive)
---|---
State | New, archived
Series | tracing: add cond_resched to ftrace_replace_code()
On Tue, 4 Dec 2018 20:12:28 +0100
Anders Roxell <anders.roxell@linaro.org> wrote:

> When running in qemu on an kernel built with allmodconfig and debug
> options (in particular kcov and ubsan) enabled, ftrace_replace_code
> function call take minutes. The ftrace selftest calls
> ftrace_replace_code to look >40000 through
> ftrace_make_call/ftrace_make_nop, and these end up calling
> __aarch64_insn_write/aarch64_insn_patch_text_nosync.
>
> Microseconds add up because this is called in a loop for each dyn_ftrace
> record, and this triggers the softlockup watchdog unless we let it sleep
> occasionally.
>
> Rework so that we call cond_resched() if !irqs_disabled() && !preempt_count().
>
> Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
> ---
>  kernel/trace/ftrace.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index c375e33239f7..582e3441e318 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -2419,11 +2419,14 @@ void __weak ftrace_replace_code(int enable)
>  {
>          struct dyn_ftrace *rec;
>          struct ftrace_page *pg;
> +        bool schedulable;
>          int failed;
>
>          if (unlikely(ftrace_disabled))
>                  return;
>
> +        schedulable = !irqs_disabled() && !preempt_count();
> +
>          do_for_each_ftrace_rec(pg, rec) {
>
>                  if (rec->flags & FTRACE_FL_DISABLED)
> @@ -2435,6 +2438,13 @@ void __weak ftrace_replace_code(int enable)
>                          /* Stop processing */
>                          return;
>                  }
> +                /*
> +                 * Some archs calls this function with interrupts or preemption
> +                 * disabled. Howeve, other archs don't and this can cause a
> +                 * tremendous unneeded latency.
> +                 */

Actually, could you move the comment up where schedulable gets set?

Thanks!

-- Steve

> +                if (schedulable)
> +                        cond_resched();
>          } while_for_each_ftrace_rec();
>  }
>
On Tue, 4 Dec 2018 14:19:08 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> > @@ -2435,6 +2438,13 @@ void __weak ftrace_replace_code(int enable)
> >                          /* Stop processing */
> >                          return;
> >                  }
> > +                /*
> > +                 * Some archs calls this function with interrupts or preemption
> > +                 * disabled. Howeve, other archs don't and this can cause a

typo "However". But could you write it this way:

"However, for other archs that can preempt, this can cause an
tremendous unneeded latency."

Thanks!

-- Steve

> > +                 * tremendous unneeded latency.
> > +                 */
>
> Actually, could you move the comment up where schedulable gets set?
>
> Thanks!
>
> -- Steve
>
> > +                if (schedulable)
> > +                        cond_resched();
> >          } while_for_each_ftrace_rec();
> >  }
> >
On Tue, 4 Dec 2018 at 20:21, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Tue, 4 Dec 2018 14:19:08 -0500
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
> > > @@ -2435,6 +2438,13 @@ void __weak ftrace_replace_code(int enable)
> > >                          /* Stop processing */
> > >                          return;
> > >                  }
> > > +                /*
> > > +                 * Some archs calls this function with interrupts or preemption
> > > +                 * disabled. Howeve, other archs don't and this can cause a
>
> typo "However". But could you write it this way:
>
> "However, for other archs that can preempt, this can cause an
> tremendous unneeded latency."
>

I'll fix this and move it up (where I added it in the first place but
moved it) =)

Cheers,
Anders

> Thanks!
>
> -- Steve
>
> > > +                 * tremendous unneeded latency.
> > > +                 */
> >
> > Actually, could you move the comment up where schedulable gets set?
> >
> > Thanks!
> >
> > -- Steve
> >
> > > +                if (schedulable)
> > > +                        cond_resched();
> > >          } while_for_each_ftrace_rec();
> > >  }
> > >
On Tue, 4 Dec 2018 20:25:31 +0100
Anders Roxell <anders.roxell@linaro.org> wrote:

> On Tue, 4 Dec 2018 at 20:21, Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > On Tue, 4 Dec 2018 14:19:08 -0500
> > Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > > > @@ -2435,6 +2438,13 @@ void __weak ftrace_replace_code(int enable)
> > > >                          /* Stop processing */
> > > >                          return;
> > > >                  }
> > > > +                /*
> > > > +                 * Some archs calls this function with interrupts or preemption
> > > > +                 * disabled. Howeve, other archs don't and this can cause a
> >
> > typo "However". But could you write it this way:
> >
> > "However, for other archs that can preempt, this can cause an
> > tremendous unneeded latency."
> >
> I'll fix this and move it up (where I added it in the first place but
> moved it) =)
>

I also noticed a grammar issue:

"Some archs call this function .."

-- Steve
On Tue, 4 Dec 2018 at 20:29, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Tue, 4 Dec 2018 20:25:31 +0100
> Anders Roxell <anders.roxell@linaro.org> wrote:
>
> > On Tue, 4 Dec 2018 at 20:21, Steven Rostedt <rostedt@goodmis.org> wrote:
> > >
> > > On Tue, 4 Dec 2018 14:19:08 -0500
> > > Steven Rostedt <rostedt@goodmis.org> wrote:
> > >
> > > > > @@ -2435,6 +2438,13 @@ void __weak ftrace_replace_code(int enable)
> > > > >                          /* Stop processing */
> > > > >                          return;
> > > > >                  }
> > > > > +                /*
> > > > > +                 * Some archs calls this function with interrupts or preemption
> > > > > +                 * disabled. Howeve, other archs don't and this can cause a
> > >
> > > typo "However". But could you write it this way:
> > >
> > > "However, for other archs that can preempt, this can cause an
> > > tremendous unneeded latency."
> > >
> >
> > I'll fix this and move it up (where I added it in the first place but
> > moved it) =)
> >
>
> I also noticed a grammar issue:
>
> "Some archs call this function .."

I will fix that.

Cheers,
Anders
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index c375e33239f7..582e3441e318 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2419,11 +2419,14 @@ void __weak ftrace_replace_code(int enable)
 {
         struct dyn_ftrace *rec;
         struct ftrace_page *pg;
+        bool schedulable;
         int failed;

         if (unlikely(ftrace_disabled))
                 return;

+        schedulable = !irqs_disabled() && !preempt_count();
+
         do_for_each_ftrace_rec(pg, rec) {

                 if (rec->flags & FTRACE_FL_DISABLED)
@@ -2435,6 +2438,13 @@ void __weak ftrace_replace_code(int enable)
                         /* Stop processing */
                         return;
                 }
+                /*
+                 * Some archs calls this function with interrupts or preemption
+                 * disabled. Howeve, other archs don't and this can cause a
+                 * tremendous unneeded latency.
+                 */
+                if (schedulable)
+                        cond_resched();
         } while_for_each_ftrace_rec();
 }
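For readers skimming the thread, here is a rough sketch of how the function could end up looking once the review feedback is folded in: the comment moved up next to where schedulable is set, and the "However" / "archs call" wording fixed. This is only an illustration, not the actual follow-up patch; the elided loop body is meant to stay as in the existing function.

void __weak ftrace_replace_code(int enable)
{
        struct dyn_ftrace *rec;
        struct ftrace_page *pg;
        bool schedulable;
        int failed;

        if (unlikely(ftrace_disabled))
                return;

        /*
         * Some archs call this function with interrupts or preemption
         * disabled. However, for other archs that can preempt, this can
         * cause a tremendous unneeded latency.
         */
        schedulable = !irqs_disabled() && !preempt_count();

        do_for_each_ftrace_rec(pg, rec) {

                if (rec->flags & FTRACE_FL_DISABLED)
                        continue;

                /* existing per-record patching and error handling (uses failed), unchanged */

                if (schedulable)
                        cond_resched();
        } while_for_each_ftrace_rec();
}

The only functional difference from the v1 diff above is where the comment sits and how it is worded; the schedulable check and the cond_resched() call are the same.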
When running in qemu on a kernel built with allmodconfig and debug
options (in particular kcov and ubsan) enabled, the ftrace_replace_code
function call takes minutes. The ftrace selftest calls
ftrace_replace_code to loop >40000 times through
ftrace_make_call/ftrace_make_nop, and these end up calling
__aarch64_insn_write/aarch64_insn_patch_text_nosync.

Microseconds add up because this is called in a loop for each dyn_ftrace
record, and this triggers the softlockup watchdog unless we let it sleep
occasionally.

Rework so that we call cond_resched() if !irqs_disabled() && !preempt_count().

Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
---
 kernel/trace/ftrace.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
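For a rough sense of why the watchdog trips (illustrative numbers, not measurements from this report): the softlockup watchdog fires after roughly 20 seconds by default, so if kcov/ubsan instrumentation pushes each record update from microseconds to even a few milliseconds, the loop over >40000 records runs far past that without ever scheduling:

 40000 records x ~3 ms/record (assumed)   ~ 120 s
 default softlockup threshold             ~ 20 s

Calling cond_resched() inside the loop, when the caller allows it, lets the CPU schedule between records and keeps the softlockup detector quiet.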