| Message ID | 20231003200434.3154797-1-song@kernel.org (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to: | BPF |
| Series | [bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket |
On Tue, Oct 3, 2023 at 1:05 PM Song Liu <song@kernel.org> wrote:
>
> htab_lock_bucket uses the following logic to avoid recursion:
>
> 1. preempt_disable();
> 2. check percpu counter htab->map_locked[hash] for recursion;
>  2.1. if map_lock[hash] is already taken, return -EBUSY;
> 3. raw_spin_lock_irqsave();
>
> However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
> logic will not be able to access the same hash of the hashtab and get
> -EBUSY. This -EBUSY is not really necessary. Fix it by disabling IRQs
> before checking map_locked:
>
> 1. preempt_disable();
> 2. local_irq_save();
> 3. check percpu counter htab->map_locked[hash] for recursion;
>  3.1. if map_lock[hash] is already taken, return -EBUSY;
> 4. raw_spin_lock().
>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Song Liu <song@kernel.org>
> ---
>  kernel/bpf/hashtab.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index a8c7e1c5abfa..347af4476662 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
>         hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
>
>         preempt_disable();
> +       local_irq_save(flags);
>         if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
>                 __this_cpu_dec(*(htab->map_locked[hash]));
> +               local_irq_restore(flags);
>                 preempt_enable();
>                 return -EBUSY;
>         }
>
> -       raw_spin_lock_irqsave(&b->raw_lock, flags);
> +       raw_spin_lock(&b->raw_lock);
>         *pflags = flags;
>

I might be wrong, but I think it's dangerous to have raw_spin_lock() +
raw_spin_unlock_irqrestore() (in htab_unlock_bucket). Looking at the
implementation of raw_spin_lock_irqsave() and raw_spin_unlock_irqrestore(),
they do their own preempt_disable()/preempt_enable(), and so with your
change I think we have an imbalance: one preempt_disable() in
htab_lock_bucket(), but two preempt_enable()s (one explicit in
htab_unlock_bucket(), and one implicit inside raw_spin_unlock_irqrestore()).

I'd say let's use plain raw_spin_unlock() + explicit
local_irq_restore(flags) in htab_unlock_bucket?

>         return 0;
> --
> 2.34.1
>
On Tue, Oct 3, 2023 at 3:31 PM Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Oct 3, 2023 at 1:05 PM Song Liu <song@kernel.org> wrote:
> >
> > htab_lock_bucket uses the following logic to avoid recursion:
> >
> > 1. preempt_disable();
> > 2. check percpu counter htab->map_locked[hash] for recursion;
> >  2.1. if map_lock[hash] is already taken, return -EBUSY;
> > 3. raw_spin_lock_irqsave();
> >
> > However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
> > logic will not be able to access the same hash of the hashtab and get
> > -EBUSY. This -EBUSY is not really necessary. Fix it by disabling IRQs
> > before checking map_locked:
> >
> > 1. preempt_disable();
> > 2. local_irq_save();
> > 3. check percpu counter htab->map_locked[hash] for recursion;
> >  3.1. if map_lock[hash] is already taken, return -EBUSY;
> > 4. raw_spin_lock().
> >
> > Suggested-by: Tejun Heo <tj@kernel.org>
> > Signed-off-by: Song Liu <song@kernel.org>
> > ---
> >  kernel/bpf/hashtab.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > index a8c7e1c5abfa..347af4476662 100644
> > --- a/kernel/bpf/hashtab.c
> > +++ b/kernel/bpf/hashtab.c
> > @@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
> >         hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
> >
> >         preempt_disable();
> > +       local_irq_save(flags);
> >         if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
> >                 __this_cpu_dec(*(htab->map_locked[hash]));
> > +               local_irq_restore(flags);
> >                 preempt_enable();
> >                 return -EBUSY;
> >         }
> >
> > -       raw_spin_lock_irqsave(&b->raw_lock, flags);
> > +       raw_spin_lock(&b->raw_lock);
> >         *pflags = flags;
> >
>
> I might be wrong, but I think it's dangerous to have raw_spin_lock() +
> raw_spin_unlock_irqrestore() (in htab_unlock_bucket). Looking at the
> implementation of raw_spin_lock_irqsave() and raw_spin_unlock_irqrestore(),
> they do their own preempt_disable()/preempt_enable(), and so with your
> change I think we have an imbalance: one preempt_disable() in
> htab_lock_bucket(), but two preempt_enable()s (one explicit in
> htab_unlock_bucket(), and one implicit inside raw_spin_unlock_irqrestore()).
>
> I'd say let's use plain raw_spin_unlock() + explicit
> local_irq_restore(flags) in htab_unlock_bucket?

Yeah, there is actually a similar window in htab_unlock_bucket(). Let's
also close that.

Thanks,
Song

> >
> >         return 0;
> > --
> > 2.34.1
> >
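For readers following the thread, below is a minimal sketch of what the unlock side could look like once both points above are addressed. It is not the actual follow-up patch; the function signature, the map_locked field, and the min_t() masking are assumed from kernel/bpf/hashtab.c of that era. The plain raw_spin_unlock() mirrors the plain raw_spin_lock() on the lock side, and the map_locked decrement happens while IRQs are still off, which closes the "similar window" Song mentions.

```c
/*
 * Sketch only, not the actual v2 patch. Signature and fields assumed from
 * kernel/bpf/hashtab.c around this time.
 */
static inline void htab_unlock_bucket(const struct bpf_htab *htab,
                                      struct bucket *b, u32 hash,
                                      unsigned long flags)
{
        hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

        /* Plain unlock, pairing with the plain raw_spin_lock() on the lock side. */
        raw_spin_unlock(&b->raw_lock);

        /*
         * Drop the recursion marker before IRQs come back on, so a BPF
         * program triggered by an IRQ on this CPU no longer sees
         * map_locked != 0 and gets a spurious -EBUSY.
         */
        __this_cpu_dec(*(htab->map_locked[hash]));

        /* Undo local_irq_save() and the explicit preempt_disable(), in order. */
        local_irq_restore(flags);
        preempt_enable();
}
```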
```diff
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index a8c7e1c5abfa..347af4476662 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
 	preempt_disable();
+	local_irq_save(flags);
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
+		local_irq_restore(flags);
 		preempt_enable();
 		return -EBUSY;
 	}
 
-	raw_spin_lock_irqsave(&b->raw_lock, flags);
+	raw_spin_lock(&b->raw_lock);
 	*pflags = flags;
 
 	return 0;
```
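Putting the hunk in context, the lock path would read roughly as below once the patch is applied. Lines outside the hunk are assumed from the surrounding source and may differ slightly; the step comments refer to the numbered steps in the commit message.

```c
/* Sketch of htab_lock_bucket() with the hunk above applied. */
static inline int htab_lock_bucket(const struct bpf_htab *htab,
                                   struct bucket *b, u32 hash,
                                   unsigned long *pflags)
{
        unsigned long flags;

        hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

        preempt_disable();              /* step 1 */
        local_irq_save(flags);          /* step 2: no IRQ can hit before the check */
        if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
                /* step 3.1: this CPU already holds the bucket, back out fully */
                __this_cpu_dec(*(htab->map_locked[hash]));
                local_irq_restore(flags);
                preempt_enable();
                return -EBUSY;
        }

        raw_spin_lock(&b->raw_lock);    /* step 4: IRQs are already off */
        *pflags = flags;

        return 0;
}
```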
htab_lock_bucket uses the following logic to avoid recursion:

1. preempt_disable();
2. check percpu counter htab->map_locked[hash] for recursion;
   2.1. if map_lock[hash] is already taken, return -EBUSY;
3. raw_spin_lock_irqsave();

However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
logic will not be able to access the same hash of the hashtab and get
-EBUSY. This -EBUSY is not really necessary. Fix it by disabling IRQs
before checking map_locked:

1. preempt_disable();
2. local_irq_save();
3. check percpu counter htab->map_locked[hash] for recursion;
   3.1. if map_lock[hash] is already taken, return -EBUSY;
4. raw_spin_lock().

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Song Liu <song@kernel.org>
---
 kernel/bpf/hashtab.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
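For context on why the spurious -EBUSY is user-visible, the update and delete paths bracket their bucket work with this lock/unlock pair roughly as in the hypothetical sketch below, so any -EBUSY from htab_lock_bucket() propagates straight back to the BPF program performing the map operation.

```c
/* Hypothetical, simplified caller pattern; names are illustrative only. */
static long example_bucket_op(struct bpf_htab *htab, struct bucket *b, u32 hash)
{
        unsigned long flags;
        int ret;

        ret = htab_lock_bucket(htab, b, hash, &flags);
        if (ret)
                return ret;     /* e.g. -EBUSY reaches the calling BPF program */

        /* ... manipulate the bucket's element list under b->raw_lock ... */

        htab_unlock_bucket(htab, b, hash, flags);
        return 0;
}
```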