
[bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket

Message ID 20231003200434.3154797-1-song@kernel.org (mailing list archive)
State Superseded
Delegated to: BPF
Headers show
Series [bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket

Checks

Context Check Description
bpf/vmtest-bpf-next-VM_Test-0 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-5 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-1 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-28 success Logs for veristat
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs_no_alu32 on aarch64 with gcc
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1352 this patch: 1352
netdev/cc_maintainers warning 7 maintainers not CCed: martin.lau@linux.dev jolsa@kernel.org haoluo@google.com sdf@google.com john.fastabend@gmail.com yonghong.song@linux.dev kpsingh@kernel.org
netdev/build_clang success Errors and warnings before: 1364 this patch: 1364
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1375 this patch: 1375
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 16 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-7 success Logs for test_maps on s390x with gcc

Commit Message

Song Liu Oct. 3, 2023, 8:04 p.m. UTC
htab_lock_bucket uses the following logic to avoid recursion:

1. preempt_disable();
2. check percpu counter htab->map_locked[hash] for recursion;
   2.1. if map_locked[hash] is already taken, return -EBUSY;
3. raw_spin_lock_irqsave();

However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
logic will not be able to access the same hash of the hashtab and will get -EBUSY.
This -EBUSY is not really necessary. Fix it by disabling IRQ before
checking map_locked:

1. preempt_disable();
2. local_irq_save();
3. check percpu counter htab->map_locked[hash] for recursion;
   3.1. if map_locked[hash] is already taken, return -EBUSY;
4. raw_spin_lock().

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Song Liu <song@kernel.org>
---
 kernel/bpf/hashtab.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
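
For reference, a rough sketch of how htab_lock_bucket() reads with this
change applied, reconstructed from the diff at the bottom of this page.
The unchanged surrounding lines (signature parameters, the local flags
declaration) are paraphrased and may not match kernel/bpf/hashtab.c
exactly:

static inline int htab_lock_bucket(const struct bpf_htab *htab,
				   struct bucket *b, u32 hash,
				   unsigned long *pflags)
{
	unsigned long flags;

	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

	preempt_disable();
	local_irq_save(flags);		/* new: mask IRQs before the recursion check */
	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
		__this_cpu_dec(*(htab->map_locked[hash]));
		local_irq_restore(flags);
		preempt_enable();
		return -EBUSY;
	}

	raw_spin_lock(&b->raw_lock);	/* was raw_spin_lock_irqsave(&b->raw_lock, flags) */
	*pflags = flags;

	return 0;
}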

Comments

Andrii Nakryiko Oct. 3, 2023, 10:31 p.m. UTC | #1
On Tue, Oct 3, 2023 at 1:05 PM Song Liu <song@kernel.org> wrote:
>
> htab_lock_bucket uses the following logic to avoid recursion:
>
> 1. preempt_disable();
> 2. check percpu counter htab->map_locked[hash] for recursion;
>    2.1. if map_locked[hash] is already taken, return -EBUSY;
> 3. raw_spin_lock_irqsave();
>
> However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
> logic will not be able to access the same hash of the hashtab and will get -EBUSY.
> This -EBUSY is not really necessary. Fix it by disabling IRQ before
> checking map_locked:
>
> 1. preempt_disable();
> 2. local_irq_save();
> 3. check percpu counter htab->map_locked[hash] for recursion;
>    3.1. if map_locked[hash] is already taken, return -EBUSY;
> 4. raw_spin_lock().
>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Song Liu <song@kernel.org>
> ---
>  kernel/bpf/hashtab.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index a8c7e1c5abfa..347af4476662 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
>         hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
>
>         preempt_disable();
> +       local_irq_save(flags);
>         if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
>                 __this_cpu_dec(*(htab->map_locked[hash]));
> +               local_irq_restore(flags);
>                 preempt_enable();
>                 return -EBUSY;
>         }
>
> -       raw_spin_lock_irqsave(&b->raw_lock, flags);
> +       raw_spin_lock(&b->raw_lock);
>         *pflags = flags;
>

I might be wrong, but I think it's dangerous to have raw_spin_lock() +
raw_spin_unlock_irqrestore() (in htab_unlock_bucket()). Looking at the
implementation of raw_spin_lock_irqsave() and
raw_spin_unlock_irqrestore(), they do their own
preempt_disable()/preempt_enable(), so with your change I think we
have an imbalance: one preempt_disable() in htab_lock_bucket(), but two
preempt_enable() calls (one explicit in htab_unlock_bucket(), and one
implicit inside raw_spin_unlock_irqrestore()).

I'd say let's use a plain raw_spin_unlock() + an explicit
local_irq_restore(flags) in htab_unlock_bucket()?
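
A sketch of what that suggestion could look like; the body of
htab_unlock_bucket() is paraphrased here rather than quoted from the
tree, so details may differ:

static inline void htab_unlock_bucket(const struct bpf_htab *htab,
				      struct bucket *b, u32 hash,
				      unsigned long flags)
{
	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

	raw_spin_unlock(&b->raw_lock);		/* instead of raw_spin_unlock_irqrestore() */
	local_irq_restore(flags);		/* flags saved by local_irq_save() in htab_lock_bucket() */
	__this_cpu_dec(*(htab->map_locked[hash]));
	preempt_enable();			/* pairs with the one explicit preempt_disable() */
}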


>         return 0;
> --
> 2.34.1
>
Song Liu Oct. 3, 2023, 10:37 p.m. UTC | #2
On Tue, Oct 3, 2023 at 3:31 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Oct 3, 2023 at 1:05 PM Song Liu <song@kernel.org> wrote:
> >
> > htab_lock_bucket uses the following logic to avoid recursion:
> >
> > 1. preempt_disable();
> > 2. check percpu counter htab->map_locked[hash] for recursion;
> >    2.1. if map_locked[hash] is already taken, return -EBUSY;
> > 3. raw_spin_lock_irqsave();
> >
> > However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
> > logic will not be able to access the same hash of the hashtab and will get -EBUSY.
> > This -EBUSY is not really necessary. Fix it by disabling IRQ before
> > checking map_locked:
> >
> > 1. preempt_disable();
> > 2. local_irq_save();
> > 3. check percpu counter htab->map_locked[hash] for recursion;
> >    3.1. if map_locked[hash] is already taken, return -EBUSY;
> > 4. raw_spin_lock().
> >
> > Suggested-by: Tejun Heo <tj@kernel.org>
> > Signed-off-by: Song Liu <song@kernel.org>
> > ---
> >  kernel/bpf/hashtab.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > index a8c7e1c5abfa..347af4476662 100644
> > --- a/kernel/bpf/hashtab.c
> > +++ b/kernel/bpf/hashtab.c
> > @@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
> >         hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
> >
> >         preempt_disable();
> > +       local_irq_save(flags);
> >         if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
> >                 __this_cpu_dec(*(htab->map_locked[hash]));
> > +               local_irq_restore(flags);
> >                 preempt_enable();
> >                 return -EBUSY;
> >         }
> >
> > -       raw_spin_lock_irqsave(&b->raw_lock, flags);
> > +       raw_spin_lock(&b->raw_lock);
> >         *pflags = flags;
> >
>
> I might be wrong, but I think it's dangerous to have raw_spin_lock() +
> raw_spin_unlock_irqrestore() (in htab_unlock_bucket()). Looking at the
> implementation of raw_spin_lock_irqsave() and
> raw_spin_unlock_irqrestore(), they do their own
> preempt_disable()/preempt_enable(), so with your change I think we
> have an imbalance: one preempt_disable() in htab_lock_bucket(), but two
> preempt_enable() calls (one explicit in htab_unlock_bucket(), and one
> implicit inside raw_spin_unlock_irqrestore()).
>
> I'd say let's use a plain raw_spin_unlock() + an explicit
> local_irq_restore(flags) in htab_unlock_bucket()?

Yeah, there is actually a similar window in htab_unlock_bucket(). Let's
also close that.
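
A sketch of an unlock order that closes that window as well: decrement
the per-CPU map_locked counter before re-enabling IRQs, so a BPF program
running from an IRQ at that point no longer sees the counter elevated.
This is only an illustration of the idea, not necessarily the exact
follow-up patch:

static inline void htab_unlock_bucket(const struct bpf_htab *htab,
				      struct bucket *b, u32 hash,
				      unsigned long flags)
{
	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

	raw_spin_unlock(&b->raw_lock);
	__this_cpu_dec(*(htab->map_locked[hash]));	/* dec while IRQs are still off */
	local_irq_restore(flags);
	preempt_enable();
}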

Thanks,
Song

>
>
> >         return 0;
> > --
> > 2.34.1
> >

Patch

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index a8c7e1c5abfa..347af4476662 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -155,13 +155,15 @@  static inline int htab_lock_bucket(const struct bpf_htab *htab,
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
 	preempt_disable();
+	local_irq_save(flags);
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
+		local_irq_restore(flags);
 		preempt_enable();
 		return -EBUSY;
 	}
 
-	raw_spin_lock_irqsave(&b->raw_lock, flags);
+	raw_spin_lock(&b->raw_lock);
 	*pflags = flags;
 
 	return 0;