Message ID | 19512bb03eed27ced5abeb5bd03f9a8381742cb1.1675111415.git.andreyknvl@google.com (mailing list archive) |
---|---|
State | New |
Series | lib/stackdepot: fixes and clean-ups |
On Mon, 30 Jan 2023 at 21:51, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Accesses to slab_index are protected by slab_lock everywhere except
> in a sanity check in stack_depot_fetch. The read access there can race
> with the write access in depot_alloc_stack.
>
> Use WRITE/READ_ONCE() to annotate the racy accesses.
>
> As the sanity check is only used to print a warning in case of a
> violation of the stack depot interface usage, it does not make a lot
> of sense to use proper synchronization.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  lib/stackdepot.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index f291ad6a4e72..cc2fe8563af4 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -269,8 +269,11 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
>                 return NULL;
>         }
>
> -       /* Move on to the next slab. */
> -       slab_index++;
> +       /*
> +        * Move on to the next slab.
> +        * WRITE_ONCE annotates a race with stack_depot_fetch.

"Pairs with potential concurrent read in stack_depot_fetch()." would be clearer.

I wouldn't say WRITE_ONCE annotates a race (race = involves 2+
accesses, but here's just 1), it just marks this access here which
itself is paired with the potential racing read in the other function.

> +        */
> +       WRITE_ONCE(slab_index, slab_index + 1);
>         slab_offset = 0;
>         /*
>          * smp_store_release() here pairs with smp_load_acquire() in
> @@ -492,6 +495,8 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
>                               unsigned long **entries)
>  {
>         union handle_parts parts = { .handle = handle };
> +       /* READ_ONCE annotates a race with depot_alloc_stack. */
> +       int slab_index_cached = READ_ONCE(slab_index);
>         void *slab;
>         size_t offset = parts.offset << DEPOT_STACK_ALIGN;
>         struct stack_record *stack;
> @@ -500,9 +505,9 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
>         if (!handle)
>                 return 0;
>
> -       if (parts.slab_index > slab_index) {
> +       if (parts.slab_index > slab_index_cached) {
>                 WARN(1, "slab index %d out of bounds (%d) for stack id %08x\n",
> -                     parts.slab_index, slab_index, handle);
> +                     parts.slab_index, slab_index_cached, handle);
>                 return 0;
>         }
>         slab = stack_slabs[parts.slab_index];
> --
> 2.25.1
>
On Tue, Jan 31, 2023 at 9:41 AM Marco Elver <elver@google.com> wrote:
>
> > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > index f291ad6a4e72..cc2fe8563af4 100644
> > --- a/lib/stackdepot.c
> > +++ b/lib/stackdepot.c
> > @@ -269,8 +269,11 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
> >                 return NULL;
> >         }
> >
> > -       /* Move on to the next slab. */
> > -       slab_index++;
> > +       /*
> > +        * Move on to the next slab.
> > +        * WRITE_ONCE annotates a race with stack_depot_fetch.
>
> "Pairs with potential concurrent read in stack_depot_fetch()." would be clearer.
>
> I wouldn't say WRITE_ONCE annotates a race (race = involves 2+
> accesses, but here's just 1), it just marks this access here which
> itself is paired with the potential racing read in the other function.

Will do in v2. Thanks!
On Tue, 31 Jan 2023 19:57:58 +0100 Andrey Konovalov <andreyknvl@gmail.com> wrote:

> On Tue, Jan 31, 2023 at 9:41 AM Marco Elver <elver@google.com> wrote:
> >
> > > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > > index f291ad6a4e72..cc2fe8563af4 100644
> > > --- a/lib/stackdepot.c
> > > +++ b/lib/stackdepot.c
> > > @@ -269,8 +269,11 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
> > >                 return NULL;
> > >         }
> > >
> > > -       /* Move on to the next slab. */
> > > -       slab_index++;
> > > +       /*
> > > +        * Move on to the next slab.
> > > +        * WRITE_ONCE annotates a race with stack_depot_fetch.
> >
> > "Pairs with potential concurrent read in stack_depot_fetch()." would be clearer.
> >
> > I wouldn't say WRITE_ONCE annotates a race (race = involves 2+
> > accesses, but here's just 1), it just marks this access here which
> > itself is paired with the potential racing read in the other function.
>
> Will do in v2. Thanks!

Please let's not redo an 18-patch series for a single line comment
change. If there are more substantial changes then OK.

I queued this as a to-be-squashed fixup against "lib/stackdepot: annotate
racy slab_index accesses":


From: Andrew Morton <akpm@linux-foundation.org>
Subject: lib-stackdepot-annotate-racy-slab_index-accesses-fix
Date: Tue Jan 31 01:10:50 PM PST 2023

enhance comment, per Marco

Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 lib/stackdepot.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/lib/stackdepot.c~lib-stackdepot-annotate-racy-slab_index-accesses-fix
+++ a/lib/stackdepot.c
@@ -271,7 +271,8 @@ depot_alloc_stack(unsigned long *entries

        /*
         * Move on to the next slab.
-        * WRITE_ONCE annotates a race with stack_depot_fetch.
+        * WRITE_ONCE pairs with potential concurrent read in
+        * stack_depot_fetch().
         */
        WRITE_ONCE(slab_index, slab_index + 1);
        slab_offset = 0;
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index f291ad6a4e72..cc2fe8563af4 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -269,8 +269,11 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
                return NULL;
        }

-       /* Move on to the next slab. */
-       slab_index++;
+       /*
+        * Move on to the next slab.
+        * WRITE_ONCE annotates a race with stack_depot_fetch.
+        */
+       WRITE_ONCE(slab_index, slab_index + 1);
        slab_offset = 0;
        /*
         * smp_store_release() here pairs with smp_load_acquire() in
@@ -492,6 +495,8 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
                              unsigned long **entries)
 {
        union handle_parts parts = { .handle = handle };
+       /* READ_ONCE annotates a race with depot_alloc_stack. */
+       int slab_index_cached = READ_ONCE(slab_index);
        void *slab;
        size_t offset = parts.offset << DEPOT_STACK_ALIGN;
        struct stack_record *stack;
@@ -500,9 +505,9 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
        if (!handle)
                return 0;

-       if (parts.slab_index > slab_index) {
+       if (parts.slab_index > slab_index_cached) {
                WARN(1, "slab index %d out of bounds (%d) for stack id %08x\n",
-                     parts.slab_index, slab_index, handle);
+                     parts.slab_index, slab_index_cached, handle);
                return 0;
        }
        slab = stack_slabs[parts.slab_index];
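For readers outside the kernel tree, the following is a minimal standalone userspace sketch of the pattern the patch applies, not the actual lib/stackdepot.c code: the writer bumps a shared index under a lock, while a lockless reader snapshots it once with READ_ONCE() for a sanity check. The simplified WRITE_ONCE()/READ_ONCE() macros and the pthread mutex are stand-ins for the kernel's primitives, and the function and variable names are illustrative only.

/*
 * Standalone sketch (assumed names, not the kernel code): writer updates
 * the index only under slab_lock; reader does a lockless sanity check.
 * These WRITE_ONCE/READ_ONCE macros merely mimic the kernel's
 * volatile-access helpers.
 */
#include <pthread.h>
#include <stdio.h>

#define WRITE_ONCE(x, val)      (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)            (*(volatile __typeof__(x) *)&(x))

static pthread_mutex_t slab_lock = PTHREAD_MUTEX_INITIALIZER;
static int slab_index;  /* written only with slab_lock held */

/* Writer side, analogous to depot_alloc_stack(). */
static void alloc_next_slab(void)
{
        pthread_mutex_lock(&slab_lock);
        /* Pairs with the potential concurrent READ_ONCE() in fetch_check(). */
        WRITE_ONCE(slab_index, slab_index + 1);
        pthread_mutex_unlock(&slab_lock);
}

/* Lockless reader side, analogous to the check in stack_depot_fetch(). */
static int fetch_check(int requested_index)
{
        /* Snapshot once so the comparison and the message use the same value. */
        int cached = READ_ONCE(slab_index);

        if (requested_index > cached) {
                fprintf(stderr, "index %d out of bounds (%d)\n",
                        requested_index, cached);
                return 0;
        }
        return 1;
}

int main(void)
{
        alloc_next_slab();
        return !fetch_check(0);
}

Note how the reader caches the value in a local variable, as the patch does with slab_index_cached, so the warning reports the same value that the bounds check actually compared against.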