Message ID | 20201112211313.2587383-1-kafai@fb.com (mailing list archive)
---|---
State | Accepted
Commit | 8e4597c627fb48f361e2a5b012202cb1b6cbcd5e
Delegated to | BPF
Series | bpf: Enable bpf_sk_storage for FENTRY/FEXIT/RAW_TP
Context | Check | Description |
---|---|---|
netdev/cover_letter | success | Link |
netdev/fixes_present | success | Link |
netdev/patch_count | success | Link |
netdev/tree_selection | success | Clearly marked for bpf-next |
netdev/subject_prefix | success | Link |
netdev/source_inline | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Link |
netdev/module_param | success | Was 0 now: 0 |
netdev/build_32bit | fail | Errors and warnings before: 101 this patch: 101 |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/verify_fixes | success | Link |
netdev/checkpatch | fail | Link |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 97 this patch: 97 |
netdev/header_inline | success | Link |
netdev/stable | success | Stable not CCed |
On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:
> This patch adds bpf_sk_storage_get_tracing_proto and
> bpf_sk_storage_delete_tracing_proto. They will check
> in runtime that the helpers can only be called when serving
> softirq or running in a task context. That should enable
> most common tracing use cases on sk.

> +	if (!in_serving_softirq() && !in_task())

This is a curious combination of checks. Would you mind indulging me
with an explanation?
On Sat, Nov 14, 2020 at 05:17:20PM -0800, Jakub Kicinski wrote:
> On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:
> > This patch adds bpf_sk_storage_get_tracing_proto and
> > bpf_sk_storage_delete_tracing_proto. They will check
> > in runtime that the helpers can only be called when serving
> > softirq or running in a task context. That should enable
> > most common tracing use cases on sk.
>
> > +	if (!in_serving_softirq() && !in_task())
>
> This is a curious combination of checks. Would you mind indulging me
> with an explanation?
The current lock usage in bpf_local_storage.c is only expected to
run in either of these contexts.
On Mon, 16 Nov 2020 09:37:34 -0800 Martin KaFai Lau wrote:
> On Sat, Nov 14, 2020 at 05:17:20PM -0800, Jakub Kicinski wrote:
> > On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:
> > > This patch adds bpf_sk_storage_get_tracing_proto and
> > > bpf_sk_storage_delete_tracing_proto. They will check
> > > in runtime that the helpers can only be called when serving
> > > softirq or running in a task context. That should enable
> > > most common tracing use cases on sk.
> >
> > > +	if (!in_serving_softirq() && !in_task())
> >
> > This is a curious combination of checks. Would you mind indulging me
> > with an explanation?
> The current lock usage in bpf_local_storage.c is only expected to
> run in either of these contexts.

:)

Locks that can run in any context but preempt disabled or softirq
disabled?

Let me cut to the chase. Are you sure you didn't mean to check
if (irq_count()) ?
On Mon, 16 Nov 2020 10:00:04 -0800 Jakub Kicinski wrote:
> irq_count()
Umpf. I meant (in_irq() || in_nmi()), don't care about sirq.
On Mon, Nov 16, 2020 at 10:00:04AM -0800, Jakub Kicinski wrote:
> On Mon, 16 Nov 2020 09:37:34 -0800 Martin KaFai Lau wrote:
> > On Sat, Nov 14, 2020 at 05:17:20PM -0800, Jakub Kicinski wrote:
> > > On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:
> > > > This patch adds bpf_sk_storage_get_tracing_proto and
> > > > bpf_sk_storage_delete_tracing_proto. They will check
> > > > in runtime that the helpers can only be called when serving
> > > > softirq or running in a task context. That should enable
> > > > most common tracing use cases on sk.
> > >
> > > > +	if (!in_serving_softirq() && !in_task())
> > >
> > > This is a curious combination of checks. Would you mind indulging me
> > > with an explanation?
> > The current lock usage in bpf_local_storage.c is only expected to
> > run in either of these contexts.
>
> :)
>
> Locks that can run in any context but preempt disabled or softirq
> disabled?
Not exactly. e.g. running from irq won't work.

> Let me cut to the chase. Are you sure you didn't mean to check
> if (irq_count()) ?
so, no.

From preempt.h:

/*
 * ...
 * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
 * ...
 */
#define in_interrupt()	(irq_count())
On Mon, 16 Nov 2020 10:37:49 -0800 Martin KaFai Lau wrote:
> On Mon, Nov 16, 2020 at 10:00:04AM -0800, Jakub Kicinski wrote:
> > Locks that can run in any context but preempt disabled or softirq
> > disabled?
> Not exactly. e.g. running from irq won't work.
>
> > Let me cut to the chase. Are you sure you didn't mean to check
> > if (irq_count()) ?
> so, no.
>
> From preempt.h:
>
> /*
>  * ...
>  * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
>  * ...
>  */
> #define in_interrupt()	(irq_count())

Right, as I said in my correction (in_irq() || in_nmi()).

Just to spell it out AFAIU in_serving_softirq() will return true when
softirq is active and interrupted by a hard irq or an NMI.
On Mon, Nov 16, 2020 at 10:43:40AM -0800, Jakub Kicinski wrote:
> On Mon, 16 Nov 2020 10:37:49 -0800 Martin KaFai Lau wrote:
> > On Mon, Nov 16, 2020 at 10:00:04AM -0800, Jakub Kicinski wrote:
> > > Locks that can run in any context but preempt disabled or softirq
> > > disabled?
> > Not exactly. e.g. running from irq won't work.
> >
> > > Let me cut to the chase. Are you sure you didn't mean to check
> > > if (irq_count()) ?
> > so, no.
> >
> > From preempt.h:
> >
> > /*
> >  * ...
> >  * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
> >  * ...
> >  */
> > #define in_interrupt()	(irq_count())
>
> Right, as I said in my correction (in_irq() || in_nmi()).
>
> Just to spell it out AFAIU in_serving_softirq() will return true when
> softirq is active and interrupted by a hard irq or an NMI.
I see what you have been getting at now. Good point. will post a fix.
diff --git a/include/net/bpf_sk_storage.h b/include/net/bpf_sk_storage.h
index 3c516dd07caf..0e85713f56df 100644
--- a/include/net/bpf_sk_storage.h
+++ b/include/net/bpf_sk_storage.h
@@ -20,6 +20,8 @@ void bpf_sk_storage_free(struct sock *sk);
 
 extern const struct bpf_func_proto bpf_sk_storage_get_proto;
 extern const struct bpf_func_proto bpf_sk_storage_delete_proto;
+extern const struct bpf_func_proto bpf_sk_storage_get_tracing_proto;
+extern const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto;
 
 struct bpf_local_storage_elem;
 struct bpf_sk_storage_diag;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e4515b0f62a8..cfce60ad1cb5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -16,6 +16,7 @@
 #include <linux/syscalls.h>
 #include <linux/error-injection.h>
 #include <linux/btf_ids.h>
+#include <net/bpf_sk_storage.h>
 
 #include <uapi/linux/bpf.h>
 #include <uapi/linux/btf.h>
@@ -1735,6 +1736,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skc_to_tcp_request_sock_proto;
 	case BPF_FUNC_skc_to_udp6_sock:
 		return &bpf_skc_to_udp6_sock_proto;
+	case BPF_FUNC_sk_storage_get:
+		return &bpf_sk_storage_get_tracing_proto;
+	case BPF_FUNC_sk_storage_delete:
+		return &bpf_sk_storage_delete_tracing_proto;
 #endif
 	case BPF_FUNC_seq_printf:
 		return prog->expected_attach_type == BPF_TRACE_ITER ?
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index fd416678f236..359908a7d3c1 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -6,6 +6,7 @@
 #include <linux/types.h>
 #include <linux/spinlock.h>
 #include <linux/bpf.h>
+#include <linux/btf.h>
 #include <linux/btf_ids.h>
 #include <linux/bpf_local_storage.h>
 #include <net/bpf_sk_storage.h>
@@ -378,6 +379,79 @@ const struct bpf_func_proto bpf_sk_storage_delete_proto = {
 	.arg2_type	= ARG_PTR_TO_BTF_ID_SOCK_COMMON,
 };
 
+static bool bpf_sk_storage_tracing_allowed(const struct bpf_prog *prog)
+{
+	const struct btf *btf_vmlinux;
+	const struct btf_type *t;
+	const char *tname;
+	u32 btf_id;
+
+	if (prog->aux->dst_prog)
+		return false;
+
+	/* Ensure the tracing program is not tracing
+	 * any bpf_sk_storage*() function and also
+	 * use the bpf_sk_storage_(get|delete) helper.
+	 */
+	switch (prog->expected_attach_type) {
+	case BPF_TRACE_RAW_TP:
+		/* bpf_sk_storage has no trace point */
+		return true;
+	case BPF_TRACE_FENTRY:
+	case BPF_TRACE_FEXIT:
+		btf_vmlinux = bpf_get_btf_vmlinux();
+		btf_id = prog->aux->attach_btf_id;
+		t = btf_type_by_id(btf_vmlinux, btf_id);
+		tname = btf_name_by_offset(btf_vmlinux, t->name_off);
+		return !!strncmp(tname, "bpf_sk_storage",
+				 strlen("bpf_sk_storage"));
+	default:
+		return false;
+	}
+
+	return false;
+}
+
+BPF_CALL_4(bpf_sk_storage_get_tracing, struct bpf_map *, map, struct sock *, sk,
+	   void *, value, u64, flags)
+{
+	if (!in_serving_softirq() && !in_task())
+		return (unsigned long)NULL;
+
+	return (unsigned long)____bpf_sk_storage_get(map, sk, value, flags);
+}
+
+BPF_CALL_2(bpf_sk_storage_delete_tracing, struct bpf_map *, map,
+	   struct sock *, sk)
+{
+	if (!in_serving_softirq() && !in_task())
+		return -EPERM;
+
+	return ____bpf_sk_storage_delete(map, sk);
+}
+
+const struct bpf_func_proto bpf_sk_storage_get_tracing_proto = {
+	.func		= bpf_sk_storage_get_tracing,
+	.gpl_only	= false,
+	.ret_type	= RET_PTR_TO_MAP_VALUE_OR_NULL,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_BTF_ID,
+	.arg2_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.arg3_type	= ARG_PTR_TO_MAP_VALUE_OR_NULL,
+	.arg4_type	= ARG_ANYTHING,
+	.allowed	= bpf_sk_storage_tracing_allowed,
+};
+
+const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto = {
+	.func		= bpf_sk_storage_delete_tracing,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_BTF_ID,
+	.arg2_btf_id	= &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON],
+	.allowed	= bpf_sk_storage_tracing_allowed,
+};
+
 struct bpf_sk_storage_diag {
 	u32 nr_maps;
 	struct bpf_map *maps[];