| Message ID | 20230821193311.3290257-7-davemarchevsky@fb.com (mailing list archive) |
| --- | --- |
| State | Accepted |
| Commit | 5861d1e8dbc4e1a03ebffb96ac041026cdd34c07 |
| Delegated to | BPF |
| Series | BPF Refcount followups 3: bpf_mem_free_rcu refcounted nodes |
On 8/21/23 12:33 PM, Dave Marchevsky wrote:
> Commit 9e7a4d9831e8 ("bpf: Allow LSM programs to use bpf spin locks")
> disabled bpf_spin_lock usage in sleepable progs, stating:
>
>     Sleepable LSM programs can be preempted which means that allowng spin
>     locks will need more work (disabling preemption and the verifier
>     ensuring that no sleepable helpers are called when a spin lock is
>     held).
>
> This patch disables preemption before grabbing bpf_spin_lock. The second
> requirement above "no sleepable helpers are called when a spin lock is
> held" is implicitly enforced by current verifier logic due to helper
> calls in spin_lock CS being disabled except for a few exceptions, none
> of which sleep.
>
> Due to above preemption changes, bpf_spin_lock CS can also be considered
> a RCU CS, so verifier's in_rcu_cs check is modified to account for this.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> ---
>  kernel/bpf/helpers.c  | 2 ++
>  kernel/bpf/verifier.c | 9 +++------
>  2 files changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 945a85e25ac5..8bd3812fb8df 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -286,6 +286,7 @@ static inline void __bpf_spin_lock(struct bpf_spin_lock *lock)
>  	compiletime_assert(u.val == 0, "__ARCH_SPIN_LOCK_UNLOCKED not 0");
>  	BUILD_BUG_ON(sizeof(*l) != sizeof(__u32));
>  	BUILD_BUG_ON(sizeof(*lock) != sizeof(__u32));
> +	preempt_disable();
>  	arch_spin_lock(l);
>  }
>
> @@ -294,6 +295,7 @@ static inline void __bpf_spin_unlock(struct bpf_spin_lock *lock)
>  	arch_spinlock_t *l = (void *)lock;
>
>  	arch_spin_unlock(l);
> +	preempt_enable();
>  }

preempt_disable()/preempt_enable() is not needed. Is it possible we can
have a different bpf_spin_lock proto, e.g, bpf_spin_lock_sleepable_proto
which implements the above with preempt_disable()/preempt_enable()?

Not sure how much difference my proposal will make since current
bpf_spin_lock() region does not support func calls except some graph
api kfunc operations.

> [...]
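For concreteness, the alternative Yonghong floats could be sketched roughly as below. The wiring is hypothetical and was never posted as code; only the name bpf_spin_lock_sleepable_proto comes from the email, and the real bpf_spin_lock helper path also saves IRQ flags, which is omitted here.

/* Hypothetical sketch of the suggested alternative: leave bpf_spin_lock()
 * untouched and give sleepable programs their own helper and proto.
 * Details below are illustrative, not actual kernel code.
 */
BPF_CALL_1(bpf_spin_lock_sleepable, struct bpf_spin_lock *, lock)
{
	preempt_disable();      /* only the sleepable variant pays this cost */
	__bpf_spin_lock(lock);  /* unpatched lock path, plain arch_spin_lock() */
	return 0;
}

const struct bpf_func_proto bpf_spin_lock_sleepable_proto = {
	.func		= bpf_spin_lock_sleepable,
	.gpl_only	= false,
	.ret_type	= RET_VOID,
	.arg1_type	= ARG_PTR_TO_SPIN_LOCK,
};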
On Mon, Aug 21, 2023 at 07:53:22PM -0700, Yonghong Song wrote:
> On 8/21/23 12:33 PM, Dave Marchevsky wrote:
> > [...]
> > +	preempt_disable();
> >  	arch_spin_lock(l);
> > [...]
> >  	arch_spin_unlock(l);
> > +	preempt_enable();
> >  }
>
> preempt_disable()/preempt_enable() is not needed. Is it possible we can

preempt_disable is needed in all cases. This mistake slipped in when
we converted preempt disabled bpf progs into migrate disabled.
For example, see how raw_spin_lock is doing it.
On 8/22/23 12:46 PM, Alexei Starovoitov wrote:
> On Mon, Aug 21, 2023 at 07:53:22PM -0700, Yonghong Song wrote:
>> [...]
>> preempt_disable()/preempt_enable() is not needed. Is it possible we can
>
> preempt_disable is needed in all cases. This mistake slipped in when
> we converted preempt disabled bpf progs into migrate disabled.
> For example, see how raw_spin_lock is doing it.

Okay, a slipped bug. That explains the difference between our bpf_spin_lock
and raw_spin_lock. The change then makes sense.
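For context, the raw_spin_lock() path Alexei points at disables preemption before spinning on the arch lock. A simplified sketch of that generic pattern is below; the real code in include/linux/spinlock_api_smp.h has lockdep and config-dependent variants that are omitted here.

/* Simplified sketch of the generic kernel pattern, not the verbatim source:
 * raw_spin_lock() disables preemption first, then acquires the arch lock,
 * which is what __bpf_spin_lock() now mirrors with this patch.
 */
static inline void __raw_spin_lock(raw_spinlock_t *lock)
{
	preempt_disable();                             /* no preemption while held */
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);  /* lockdep annotation */
	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock); /* arch spin */
}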
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 945a85e25ac5..8bd3812fb8df 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -286,6 +286,7 @@ static inline void __bpf_spin_lock(struct bpf_spin_lock *lock)
 	compiletime_assert(u.val == 0, "__ARCH_SPIN_LOCK_UNLOCKED not 0");
 	BUILD_BUG_ON(sizeof(*l) != sizeof(__u32));
 	BUILD_BUG_ON(sizeof(*lock) != sizeof(__u32));
+	preempt_disable();
 	arch_spin_lock(l);
 }
 
@@ -294,6 +295,7 @@ static inline void __bpf_spin_unlock(struct bpf_spin_lock *lock)
 	arch_spinlock_t *l = (void *)lock;
 
 	arch_spin_unlock(l);
+	preempt_enable();
 }
 
 #else
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 55607ab30522..33e4b854d2d4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5062,7 +5062,9 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
  */
 static bool in_rcu_cs(struct bpf_verifier_env *env)
 {
-	return env->cur_state->active_rcu_lock || !env->prog->aux->sleepable;
+	return env->cur_state->active_rcu_lock ||
+	       env->cur_state->active_lock.ptr ||
+	       !env->prog->aux->sleepable;
 }
 
 /* Once GCC supports btf_type_tag the following mechanism will be replaced with tag check */
@@ -16980,11 +16982,6 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
 			verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
 			return -EINVAL;
 		}
-
-		if (prog->aux->sleepable) {
-			verbose(env, "sleepable progs cannot use bpf_spin_lock yet\n");
-			return -EINVAL;
-		}
 	}
 
 	if (btf_record_has_field(map->record, BPF_TIMER)) {
Commit 9e7a4d9831e8 ("bpf: Allow LSM programs to use bpf spin locks")
disabled bpf_spin_lock usage in sleepable progs, stating:

    Sleepable LSM programs can be preempted which means that allowng spin
    locks will need more work (disabling preemption and the verifier
    ensuring that no sleepable helpers are called when a spin lock is
    held).

This patch disables preemption before grabbing bpf_spin_lock. The second
requirement above "no sleepable helpers are called when a spin lock is
held" is implicitly enforced by current verifier logic due to helper
calls in spin_lock CS being disabled except for a few exceptions, none
of which sleep.

Due to above preemption changes, bpf_spin_lock CS can also be considered
a RCU CS, so verifier's in_rcu_cs check is modified to account for this.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
 kernel/bpf/helpers.c  | 2 ++
 kernel/bpf/verifier.c | 9 +++------
 2 files changed, 5 insertions(+), 6 deletions(-)
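As an illustration of what the change enables (not part of the patch or its selftests), a minimal sleepable LSM program taking a bpf_spin_lock might look roughly like this; the map layout, hook choice, and names are assumptions made for the example.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sketch: a sleepable LSM program using bpf_spin_lock,
 * which the verifier accepts once the sleepable-prog restriction is lifted. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct val {
	struct bpf_spin_lock lock;
	__u64 cnt;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct val);
} counters SEC(".maps");

SEC("lsm.s/file_open")               /* sleepable LSM hook */
int BPF_PROG(count_opens, struct file *file)
{
	__u32 key = 0;
	struct val *v = bpf_map_lookup_elem(&counters, &key);

	if (!v)
		return 0;

	bpf_spin_lock(&v->lock);     /* helper now also disables preemption */
	v->cnt++;                    /* short critical section, no sleeping */
	bpf_spin_unlock(&v->lock);
	return 0;
}

char _license[] SEC("license") = "GPL";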