Message ID | 20240525023744.390040466@goodmis.org (mailing list archive) |
---|---|
State | Superseded |
Series | [01/20] function_graph: Convert ret_stack to a series of longs |
On Fri, 24 May 2024 22:37:12 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
>
> Instead of looping through all the elements of fgraph_array[] to see if
> there's a gops attached to one and then calling its gops->func(), create
> a fgraph_array_bitmask that sets bits when an index in the array is
> reserved (via the simple LRU algorithm). Then only the bits set in this
> bitmask need to be checked, so only the array elements that have ops
> registered are examined.
>
> Note, we do not care about races. If a bit is set before the gops is
> assigned, it only wastes time looking at the element and ignoring it (as
> it did before this bitmask was added).

This is OK because we check gops == &fgraph_stub anyway.

By the way, shouldn't we also make the "if (gops == &fgraph_stub)"
check unlikely()?

This change looks good to me.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thank you,

> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
>  kernel/trace/fgraph.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
> index 5e8e13ffcfb6..1aae521e5997 100644
> --- a/kernel/trace/fgraph.c
> +++ b/kernel/trace/fgraph.c
> @@ -173,6 +173,7 @@ DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph);
>  int ftrace_graph_active;
>  
>  static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE];
> +static unsigned long fgraph_array_bitmask;
>  
>  /* LRU index table for fgraph_array */
>  static int fgraph_lru_table[FGRAPH_ARRAY_SIZE];
> @@ -197,6 +198,8 @@ static int fgraph_lru_release_index(int idx)
>  
>  	fgraph_lru_table[fgraph_lru_last] = idx;
>  	fgraph_lru_last = (fgraph_lru_last + 1) % FGRAPH_ARRAY_SIZE;
> +
> +	clear_bit(idx, &fgraph_array_bitmask);
>  	return 0;
>  }
>  
> @@ -211,6 +214,8 @@ static int fgraph_lru_alloc_index(void)
>  
>  	fgraph_lru_table[fgraph_lru_next] = -1;
>  	fgraph_lru_next = (fgraph_lru_next + 1) % FGRAPH_ARRAY_SIZE;
> +
> +	set_bit(idx, &fgraph_array_bitmask);
>  	return idx;
>  }
>  
> @@ -632,7 +637,8 @@ int function_graph_enter(unsigned long ret, unsigned long func,
>  	if (offset < 0)
>  		goto out;
>  
> -	for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
> +	for_each_set_bit(i, &fgraph_array_bitmask,
> +			 sizeof(fgraph_array_bitmask) * BITS_PER_BYTE) {
>  		struct fgraph_ops *gops = fgraph_array[i];
>  		int save_curr_ret_stack;
>  
> -- 
> 2.43.0
On Mon, 27 May 2024 09:09:49 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> > Note, we do not care about races. If a bit is set before the gops is
> > assigned, it only wastes time looking at the element and ignoring it (as
> > it did before this bitmask was added).
>
> This is OK because we check gops == &fgraph_stub anyway.
> By the way, shouldn't we also make the "if (gops == &fgraph_stub)"
> check unlikely()?

Yeah, I'll add the unlikely() here too.

> This change looks good to me.
>
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thanks for the review Masami!

-- Steve
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 5e8e13ffcfb6..1aae521e5997 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -173,6 +173,7 @@ DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph);
 int ftrace_graph_active;
 
 static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE];
+static unsigned long fgraph_array_bitmask;
 
 /* LRU index table for fgraph_array */
 static int fgraph_lru_table[FGRAPH_ARRAY_SIZE];
@@ -197,6 +198,8 @@ static int fgraph_lru_release_index(int idx)
 
 	fgraph_lru_table[fgraph_lru_last] = idx;
 	fgraph_lru_last = (fgraph_lru_last + 1) % FGRAPH_ARRAY_SIZE;
+
+	clear_bit(idx, &fgraph_array_bitmask);
 	return 0;
 }
 
@@ -211,6 +214,8 @@ static int fgraph_lru_alloc_index(void)
 
 	fgraph_lru_table[fgraph_lru_next] = -1;
 	fgraph_lru_next = (fgraph_lru_next + 1) % FGRAPH_ARRAY_SIZE;
+
+	set_bit(idx, &fgraph_array_bitmask);
 	return idx;
 }
 
@@ -632,7 +637,8 @@ int function_graph_enter(unsigned long ret, unsigned long func,
 	if (offset < 0)
 		goto out;
 
-	for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
+	for_each_set_bit(i, &fgraph_array_bitmask,
+			 sizeof(fgraph_array_bitmask) * BITS_PER_BYTE) {
 		struct fgraph_ops *gops = fgraph_array[i];
 		int save_curr_ret_stack;