
[v2,bpf-next,1/2] bpf: add verified_insns to bpf_prog_info and fdinfo

Message ID 20211011205415.234479-2-davemarchevsky@fb.com (mailing list archive)
State Superseded
Delegated to: BPF
Series bpf: keep track of verifier insn_processed

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
netdev/cover_letter success Series has a cover letter
netdev/fixes_present success Fixes tag not required for -next series
netdev/patch_count success Link
netdev/tree_selection success Clearly marked for bpf-next
netdev/subject_prefix success Link
netdev/cc_maintainers warning 8 maintainers not CCed: joe@cilium.io brouer@redhat.com songliubraving@fb.com yhs@fb.com liuhangbin@gmail.com jackmanb@google.com kafai@fb.com kpsingh@kernel.org
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 11790 this patch: 11790
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success No Fixes tag
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 56 lines checked
netdev/build_allmodconfig_warn success Errors and warnings before: 11421 this patch: 11421
netdev/header_inline success No static functions without inline keyword in header files
bpf/vmtest-bpf-next success VM_Test

Commit Message

Dave Marchevsky Oct. 11, 2021, 8:54 p.m. UTC
This stat is currently printed in the verifier log and not stored
anywhere. To ease consumption of this data, add a field to bpf_prog_aux
so it can be exposed via BPF_OBJ_GET_INFO_BY_FD and fdinfo.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
 include/linux/bpf.h            | 1 +
 include/uapi/linux/bpf.h       | 2 +-
 kernel/bpf/syscall.c           | 8 ++++++--
 kernel/bpf/verifier.c          | 1 +
 tools/include/uapi/linux/bpf.h | 2 +-
 5 files changed, 10 insertions(+), 4 deletions(-)
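
For context, a minimal userspace sketch of how the new counter could be consumed via BPF_OBJ_GET_INFO_BY_FD (assuming libbpf and the bpf_prog_info layout from this patch; prog_fd and print_verified_insns are hypothetical, with prog_fd obtained elsewhere):

#include <stdio.h>
#include <bpf/bpf.h>

/* prog_fd is assumed to come from an earlier bpf_prog_load()/bpf_obj_get(). */
static int print_verified_insns(int prog_fd)
{
	struct bpf_prog_info info = {};
	__u32 info_len = sizeof(info);
	int err;

	err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
	if (err)
		return err;

	/* With this patch, the 31 bits next to gpl_compatible carry the
	 * verifier's insn_processed count; the cast avoids bit-field quirks.
	 */
	printf("verified_insns: %u\n", (__u32)info.verified_insns);
	return 0;
}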

Comments

Andrii Nakryiko Oct. 18, 2021, 9:22 p.m. UTC | #1
On Mon, Oct 11, 2021 at 1:54 PM Dave Marchevsky <davemarchevsky@fb.com> wrote:
>
> This stat is currently printed in the verifier log and not stored
> anywhere. To ease consumption of this data, add a field to bpf_prog_aux
> so it can be exposed via BPF_OBJ_GET_INFO_BY_FD and fdinfo.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> ---
>  include/linux/bpf.h            | 1 +
>  include/uapi/linux/bpf.h       | 2 +-
>  kernel/bpf/syscall.c           | 8 ++++++--
>  kernel/bpf/verifier.c          | 1 +
>  tools/include/uapi/linux/bpf.h | 2 +-
>  5 files changed, 10 insertions(+), 4 deletions(-)
>

[...]

> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 6fc59d61937a..d053fc7e7995 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -5591,7 +5591,7 @@ struct bpf_prog_info {
>         char name[BPF_OBJ_NAME_LEN];
>         __u32 ifindex;
>         __u32 gpl_compatible:1;
> -       __u32 :31; /* alignment pad */
> +       __u32 verified_insns:31;

These 31 unused bits seem like a good place to add extra generic
flags, not new counters. E.g., like a sleepable flag. So I wonder if
it's better to use a dedicated u32 field for counters like
verified_insns and keep these reserved fields for boolean flags?

Daniel, I know you proposed to reuse those 31 bits. How strong do you
feel about this? For any other kind of counter we seem to be using a
complete dedicated integer field, so it would be consistent to keep
doing that?

Having a sleepable bit still seems like a good idea, btw :) but it's a
separate change from Dave's.

>         __u64 netns_dev;
>         __u64 netns_ino;
>         __u32 nr_jited_ksyms;
> --
> 2.30.2
>
Dave Marchevsky Oct. 20, 2021, 8:03 a.m. UTC | #2
On 10/18/21 5:22 PM, Andrii Nakryiko wrote:   
> On Mon, Oct 11, 2021 at 1:54 PM Dave Marchevsky <davemarchevsky@fb.com> wrote:
>>
>> This stat is currently printed in the verifier log and not stored
>> anywhere. To ease consumption of this data, add a field to bpf_prog_aux
>> so it can be exposed via BPF_OBJ_GET_INFO_BY_FD and fdinfo.
>>
>> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
>> ---
>>  include/linux/bpf.h            | 1 +
>>  include/uapi/linux/bpf.h       | 2 +-
>>  kernel/bpf/syscall.c           | 8 ++++++--
>>  kernel/bpf/verifier.c          | 1 +
>>  tools/include/uapi/linux/bpf.h | 2 +-
>>  5 files changed, 10 insertions(+), 4 deletions(-)
>>
> 
> [...]
> 
>> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
>> index 6fc59d61937a..d053fc7e7995 100644
>> --- a/tools/include/uapi/linux/bpf.h
>> +++ b/tools/include/uapi/linux/bpf.h
>> @@ -5591,7 +5591,7 @@ struct bpf_prog_info {
>>         char name[BPF_OBJ_NAME_LEN];
>>         __u32 ifindex;
>>         __u32 gpl_compatible:1;
>> -       __u32 :31; /* alignment pad */
>> +       __u32 verified_insns:31;
> 
> These 31 unused bits seem like a good place to add extra generic
> flags, not new counters. E.g., like a sleepable flag. So I wonder if
> it's better to use a dedicated u32 field for counters like
> verified_insns and keep these reserved fields for boolean flags?
> 
> Daniel, I know you proposed to reuse those 31 bits. How strong do you
> feel about this? For any other kind of counter we seem to be using a
> complete dedicated integer field, so it would be consistent to keep
> doing that?
> 
> Having a sleepable bit still seems like a good idea, btw :) but it's a
> separate change from Dave's.

Re: using the padding vs. adding a new field, I don't have a strong feeling
either way, but if there are proper flags that could use that space in the near
future, that, combined with consistency with the other counters, leans me
towards adding a new field (see the sketch below).
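
For illustration, roughly what that alternative would look like (a hypothetical sketch using a stand-in struct name, not the actual v3 change):

#include <linux/types.h>

/* Hypothetical sketch: keep the 31 reserved bits next to gpl_compatible
 * for future boolean flags (e.g. a sleepable bit) and expose the counter
 * as a dedicated __u32 appended at the end of bpf_prog_info.
 */
struct bpf_prog_info_sketch {
	__u32 gpl_compatible:1;
	__u32 :31;		/* still reserved for future flags */
	/* ... the rest of the existing bpf_prog_info fields ... */
	__u32 verified_insns;	/* dedicated counter, appended at the end */
} __attribute__((aligned(8)));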

Also, when using the bitfield space, clang complains about the types in the
selftest assert:

tools/testing/selftests/bpf/prog_tests/verif_stats.c:23:17: error: ‘typeof’ applied to a bit-field
   23 |  if (!ASSERT_GT(info.verified_insns, 0, "verified_insns"))
      |                 ^~~~
./test_progs.h:227:9: note: in definition of macro ‘ASSERT_GT’
  227 |  typeof(actual) ___act = (actual);    \

That necessitated a __u32 cast in this version of the patchset. I don't think
it would cause issues outside of this specific selftest, but it's worth noting.
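
For reference, a standalone reproduction of the issue; CHECK_GT and fake_prog_info below are simplified stand-ins for ASSERT_GT and bpf_prog_info, not the actual selftest code:

#include <stdio.h>

/* Simplified stand-in for ASSERT_GT() from test_progs.h: it declares a
 * local of typeof(actual), which is why passing a raw bit-field fails
 * with "'typeof' applied to a bit-field".
 */
#define CHECK_GT(actual, expected)			\
	({						\
		typeof(actual) ___act = (actual);	\
		typeof(expected) ___exp = (expected);	\
		___act > ___exp;			\
	})

struct fake_prog_info {
	unsigned int gpl_compatible:1;
	unsigned int verified_insns:31;
};

int main(void)
{
	struct fake_prog_info info = { .verified_insns = 42 };

	/* CHECK_GT(info.verified_insns, 0u) would not compile; the cast
	 * below mirrors the __u32 cast used in the v2 selftest.
	 */
	if (CHECK_GT((unsigned int)info.verified_insns, 0u))
		printf("verified_insns=%u\n", (unsigned int)info.verified_insns);
	return 0;
}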

Anyway, I sent a v3 of the patchset with the 'new field' approach and the other
comments addressed.

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d604c8251d88..c93fd845a758 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -887,6 +887,7 @@  struct bpf_prog_aux {
 	struct bpf_prog *prog;
 	struct user_struct *user;
 	u64 load_time; /* ns since boottime */
+	u32 verified_insns;
 	struct bpf_map *cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE];
 	char name[BPF_OBJ_NAME_LEN];
 #ifdef CONFIG_SECURITY
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 6fc59d61937a..d053fc7e7995 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5591,7 +5591,7 @@  struct bpf_prog_info {
 	char name[BPF_OBJ_NAME_LEN];
 	__u32 ifindex;
 	__u32 gpl_compatible:1;
-	__u32 :31; /* alignment pad */
+	__u32 verified_insns:31;
 	__u64 netns_dev;
 	__u64 netns_ino;
 	__u32 nr_jited_ksyms;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 4e50c0bfdb7d..5beb321b3b3b 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1848,7 +1848,8 @@  static void bpf_prog_show_fdinfo(struct seq_file *m, struct file *filp)
 		   "prog_id:\t%u\n"
 		   "run_time_ns:\t%llu\n"
 		   "run_cnt:\t%llu\n"
-		   "recursion_misses:\t%llu\n",
+		   "recursion_misses:\t%llu\n"
+		   "verified_insns:\t%u\n",
 		   prog->type,
 		   prog->jited,
 		   prog_tag,
@@ -1856,7 +1857,8 @@  static void bpf_prog_show_fdinfo(struct seq_file *m, struct file *filp)
 		   prog->aux->id,
 		   stats.nsecs,
 		   stats.cnt,
-		   stats.misses);
+		   stats.misses,
+		   prog->aux->verified_insns);
 }
 #endif
 
@@ -3625,6 +3627,8 @@  static int bpf_prog_get_info_by_fd(struct file *file,
 	info.run_cnt = stats.cnt;
 	info.recursion_misses = stats.misses;
 
+	info.verified_insns = prog->aux->verified_insns;
+
 	if (!bpf_capable()) {
 		info.jited_prog_len = 0;
 		info.xlated_prog_len = 0;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 20900a1bac12..81c7eecdc5d5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14038,6 +14038,7 @@  int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
 
 	env->verification_time = ktime_get_ns() - start_time;
 	print_verification_stats(env);
+	env->prog->aux->verified_insns = env->insn_processed;
 
 	if (log->level && bpf_verifier_log_full(log))
 		ret = -ENOSPC;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 6fc59d61937a..d053fc7e7995 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5591,7 +5591,7 @@  struct bpf_prog_info {
 	char name[BPF_OBJ_NAME_LEN];
 	__u32 ifindex;
 	__u32 gpl_compatible:1;
-	__u32 :31; /* alignment pad */
+	__u32 verified_insns:31;
 	__u64 netns_dev;
 	__u64 netns_ino;
 	__u32 nr_jited_ksyms;
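
For completeness, a small sketch of reading the new fdinfo line added above from userspace (dump_prog_fdinfo is hypothetical; prog_fd is assumed to be a valid BPF program fd held by the current process):

#include <stdio.h>

/* Dump /proc/self/fdinfo for a BPF prog fd; with this patch the output
 * should include a "verified_insns:\t<count>" line.
 */
static void dump_prog_fdinfo(int prog_fd)
{
	char path[64], line[256];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", prog_fd);
	f = fopen(path, "r");
	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
}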