
[bpf-next,v2,3/3] bpf: Use BPF_KFUNC macro at all kfunc definitions

Message ID 20230123171506.71995-4-void@manifault.com (mailing list archive)
State Changes Requested
Delegated to: BPF
Headers show
Series Add BPF_KFUNC macro for kfunc definitions | expand

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 79 this patch: 52
netdev/cc_maintainers warning 31 maintainers not CCed: yhs@fb.com netfilter-devel@vger.kernel.org pabeni@redhat.com steffen.klassert@secunet.com linux-kselftest@vger.kernel.org linux-stm32@st-md-mailman.stormreply.com mcoquelin.stm32@gmail.com ebiederm@xmission.com rostedt@goodmis.org pablo@netfilter.org shuah@kernel.org coreteam@netfilter.org yoshfuji@linux-ipv6.org edumazet@google.com cgroups@vger.kernel.org lizefan.x@bytedance.com netdev@vger.kernel.org fw@strlen.de davem@davemloft.net alexandre.torgue@foss.st.com kexec@lists.infradead.org mhiramat@kernel.org hannes@cmpxchg.org herbert@gondor.apana.org.au tj@kernel.org kuba@kernel.org dsahern@kernel.org linux-trace-kernel@vger.kernel.org kadlec@netfilter.org linux-arm-kernel@lists.infradead.org mykolal@fb.com
netdev/build_clang fail Errors and warnings before: 1 this patch: 24
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 79 this patch: 52
netdev/checkpatch warning WARNING: line length of 84 exceeds 80 columns WARNING: line length of 87 exceeds 80 columns WARNING: line length of 89 exceeds 80 columns WARNING: line length of 90 exceeds 80 columns WARNING: line length of 91 exceeds 80 columns WARNING: line length of 93 exceeds 80 columns
netdev/kdoc fail Errors and warnings before: 0 this patch: 15
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 fail Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-37 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-38 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-21 fail Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-36 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-PR fail PR summary
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-7 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix

Commit Message

David Vernet Jan. 23, 2023, 5:15 p.m. UTC
Now that we have the BPF_KFUNC macro, we should use it to define all
existing kfuncs to ensure that they'll never be elided in LTO builds,
and so that we can remove the __diag blocks around them.

Suggested-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: David Vernet <void@manifault.com>
---
 kernel/bpf/helpers.c                          | 44 ++++++-------
 kernel/cgroup/rstat.c                         |  4 +-
 kernel/kexec_core.c                           |  3 +-
 kernel/trace/bpf_trace.c                      | 18 ++----
 net/bpf/test_run.c                            | 64 +++++++++----------
 net/ipv4/tcp_bbr.c                            | 16 ++---
 net/ipv4/tcp_cong.c                           | 10 +--
 net/ipv4/tcp_cubic.c                          | 12 ++--
 net/ipv4/tcp_dctcp.c                          | 12 ++--
 net/netfilter/nf_conntrack_bpf.c              | 34 ++++------
 net/netfilter/nf_nat_bpf.c                    | 12 +---
 net/xfrm/xfrm_interface_bpf.c                 | 14 +---
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  3 +-
 13 files changed, 105 insertions(+), 141 deletions(-)
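
For illustration, here is a minimal sketch of the kind of wrapping macro this
series applies. It is an assumption based on the series description, not the
exact definition from patch 1/3: the macro is presumed to mark the kfunc
__used and noinline so LTO cannot elide it, with the leading declaration also
standing in for the prototype that -Wmissing-prototypes would otherwise demand.

/* Sketch only; the real macro lands in patch 1/3 and may differ. The
 * declaration carries the attributes, and repeating the prototype lets
 * the function body follow the macro's closing paren. */
#define BPF_KFUNC(proto)		\
	__used noinline proto;		\
	proto

/* Usage, matching the hunks quoted in the comments below: */
BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
{
	/* ... kfunc body ... */
}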

Comments

Alexei Starovoitov Jan. 23, 2023, 6:33 p.m. UTC | #1
On Mon, Jan 23, 2023 at 11:15:06AM -0600, David Vernet wrote:
> -void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> +BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
>  {
>  	struct btf_struct_meta *meta = meta__ign;
>  	u64 size = local_type_id__k;
> @@ -1790,7 +1786,7 @@ void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
>  	return p;
>  }
>  
> -void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
> +BPF_KFUNC(void bpf_obj_drop_impl(void *p__alloc, void *meta__ign))
>  {

The following also works:
-BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
+BPF_KFUNC(
+void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
+)

and it looks a little bit cleaner to me.

git grep -A1 BPF_KFUNC
can still find all instances of kfuncs.

wdyt?
David Vernet Jan. 23, 2023, 6:48 p.m. UTC | #2
On Mon, Jan 23, 2023 at 10:33:05AM -0800, Alexei Starovoitov wrote:
> On Mon, Jan 23, 2023 at 11:15:06AM -0600, David Vernet wrote:
> > -void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> > +BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
> >  {
> >  	struct btf_struct_meta *meta = meta__ign;
> >  	u64 size = local_type_id__k;
> > @@ -1790,7 +1786,7 @@ void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> >  	return p;
> >  }
> >  
> > -void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
> > +BPF_KFUNC(void bpf_obj_drop_impl(void *p__alloc, void *meta__ign))
> >  {
> 
> The following also works:
> -BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
> +BPF_KFUNC(
> +void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> +)
> 
> and it looks a little bit cleaner to me.
> 
> git grep -A1 BPF_KFUNC
> can still find all instances of kfuncs.
> 
> wdyt?

I'm fine with putting it on its own line if that's your preference.
Agreed that it might be a bit cleaner, especially for functions with the
return type on its own line, so we'd have e.g.:

BPF_KFUNC(
struct nf_conn *
bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
) {

// ...

}

Note the presence of the { on the closing paren. Are you ok with that?
Otherwise I think it will look a bit odd:

BPF_KFUNC(
struct nf_conn *
bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
)
{

}

Thanks,
David
Alexei Starovoitov Jan. 23, 2023, 6:54 p.m. UTC | #3
On Mon, Jan 23, 2023 at 12:48:27PM -0600, David Vernet wrote:
> On Mon, Jan 23, 2023 at 10:33:05AM -0800, Alexei Starovoitov wrote:
> > On Mon, Jan 23, 2023 at 11:15:06AM -0600, David Vernet wrote:
> > > -void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> > > +BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
> > >  {
> > >  	struct btf_struct_meta *meta = meta__ign;
> > >  	u64 size = local_type_id__k;
> > > @@ -1790,7 +1786,7 @@ void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> > >  	return p;
> > >  }
> > >  
> > > -void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
> > > +BPF_KFUNC(void bpf_obj_drop_impl(void *p__alloc, void *meta__ign))
> > >  {
> > 
> > The following also works:
> > -BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
> > +BPF_KFUNC(
> > +void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> > +)
> > 
> > and it looks a little bit cleaner to me.
> > 
> > git grep -A1 BPF_KFUNC
> > can still find all instances of kfuncs.
> > 
> > wdyt?
> 
> I'm fine with putting it on its own line if that's your preference.
> Agreed that it might be a bit cleaner, especially for functions with the
> return type on its own line, so we'd have e.g.:
> 
> BPF_KFUNC(
> struct nf_conn *
> bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
> 		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)

Yeah. Especially for those.

> ) {
> 
> // ...
> 
> }
> 
> Note the presence of the { on the closing paren. Are you ok with that?
> Otherwise I think it will look a bit odd:

Yep. Good idea. Either ){ or ) { look good to me.

> BPF_KFUNC(
> struct nf_conn *
> bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
> 		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
> )
> {
> 
> }
> 
> Thanks,
> David
David Vernet Jan. 23, 2023, 7:01 p.m. UTC | #4
On Mon, Jan 23, 2023 at 10:54:34AM -0800, Alexei Starovoitov wrote:
> On Mon, Jan 23, 2023 at 12:48:27PM -0600, David Vernet wrote:
> > On Mon, Jan 23, 2023 at 10:33:05AM -0800, Alexei Starovoitov wrote:
> > > On Mon, Jan 23, 2023 at 11:15:06AM -0600, David Vernet wrote:
> > > > -void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> > > > +BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
> > > >  {
> > > >  	struct btf_struct_meta *meta = meta__ign;
> > > >  	u64 size = local_type_id__k;
> > > > @@ -1790,7 +1786,7 @@ void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> > > >  	return p;
> > > >  }
> > > >  
> > > > -void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
> > > > +BPF_KFUNC(void bpf_obj_drop_impl(void *p__alloc, void *meta__ign))
> > > >  {
> > > 
> > > The following also works:
> > > -BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
> > > +BPF_KFUNC(
> > > +void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
> > > +)
> > > 
> > > and it looks a little bit cleaner to me.
> > > 
> > > git grep -A1 BPF_KFUNC
> > > can still find all instances of kfuncs.
> > > 
> > > wdyt?
> > 
> > I'm fine with putting it on its own line if that's your preference.
> > Agreed that it might be a bit cleaner, especially for functions with the
> > return type on its own line, so we'd have e.g.:
> > 
> > BPF_KFUNC(
> > struct nf_conn *
> > bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
> > 		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
> 
> Yeah. Especially for those.
> 
> > ) {
> > 
> > // ...
> > 
> > }
> > 
> > Note the presence of the { on the closing paren. Are you ok with that?
> > Otherwise I think it will look a bit odd:
> 
> Yep. Good idea. Either ){ or ) { look good to me.

Ack, will send v3 with that change later today, along with anything else
if someone else reviews.

> 
> > BPF_KFUNC(
> > struct nf_conn *
> > bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
> > 		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
> > )
> > {
> > 
> > }
> > 
> > Thanks,
> > David
kernel test robot Jan. 23, 2023, 7:01 p.m. UTC | #5
Hi David,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/David-Vernet/bpf-Add-BPF_KFUNC-macro-for-defining-kfuncs/20230124-011804
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230123171506.71995-4-void%40manifault.com
patch subject: [PATCH bpf-next v2 3/3] bpf: Use BPF_KFUNC macro at all kfunc definitions
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20230124/202301240220.RKx4Wgip-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/760b15a8e5d45d6e9925d2439e0d052de969b361
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review David-Vernet/bpf-Add-BPF_KFUNC-macro-for-defining-kfuncs/20230124-011804
        git checkout 760b15a8e5d45d6e9925d2439e0d052de969b361
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k SHELL=/bin/bash kernel/ net/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> kernel/cgroup/rstat.c:30: warning: expecting prototype for cgroup_rstat_updated(). Prototype was for BPF_KFUNC() instead
>> kernel/cgroup/rstat.c:235: warning: expecting prototype for cgroup_rstat_flush(). Prototype was for BPF_KFUNC() instead
--
>> kernel/bpf/helpers.c:1850: warning: expecting prototype for bpf_task_acquire(). Prototype was for BPF_KFUNC() instead
>> kernel/bpf/helpers.c:1861: warning: expecting prototype for bpf_task_acquire_not_zero(). Prototype was for BPF_KFUNC() instead
>> kernel/bpf/helpers.c:1913: warning: expecting prototype for bpf_task_kptr_get(). Prototype was for BPF_KFUNC() instead
>> kernel/bpf/helpers.c:1926: warning: expecting prototype for bpf_task_release(). Prototype was for BPF_KFUNC() instead
>> kernel/bpf/helpers.c:1941: warning: expecting prototype for bpf_cgroup_acquire(). Prototype was for BPF_KFUNC() instead
>> kernel/bpf/helpers.c:1953: warning: expecting prototype for bpf_cgroup_kptr_get(). Prototype was for BPF_KFUNC() instead
>> kernel/bpf/helpers.c:1985: warning: expecting prototype for bpf_cgroup_release(). Prototype was for BPF_KFUNC() instead
>> kernel/bpf/helpers.c:2000: warning: expecting prototype for bpf_cgroup_ancestor(). Prototype was for BPF_KFUNC() instead
>> kernel/bpf/helpers.c:2019: warning: expecting prototype for bpf_task_from_pid(). Prototype was for BPF_KFUNC() instead
--
   net/bpf/test_run.c:490:14: warning: no previous prototype for 'bpf_fentry_test2' [-Wmissing-prototypes]
     490 | int noinline bpf_fentry_test2(int a, u64 b)
         |              ^~~~~~~~~~~~~~~~
   net/bpf/test_run.c:495:14: warning: no previous prototype for 'bpf_fentry_test3' [-Wmissing-prototypes]
     495 | int noinline bpf_fentry_test3(char a, int b, u64 c)
         |              ^~~~~~~~~~~~~~~~
   net/bpf/test_run.c:500:14: warning: no previous prototype for 'bpf_fentry_test4' [-Wmissing-prototypes]
     500 | int noinline bpf_fentry_test4(void *a, char b, int c, u64 d)
         |              ^~~~~~~~~~~~~~~~
   net/bpf/test_run.c:505:14: warning: no previous prototype for 'bpf_fentry_test5' [-Wmissing-prototypes]
     505 | int noinline bpf_fentry_test5(u64 a, void *b, short c, int d, u64 e)
         |              ^~~~~~~~~~~~~~~~
   net/bpf/test_run.c:510:14: warning: no previous prototype for 'bpf_fentry_test6' [-Wmissing-prototypes]
     510 | int noinline bpf_fentry_test6(u64 a, void *b, short c, int d, void *e, u64 f)
         |              ^~~~~~~~~~~~~~~~
>> net/bpf/test_run.c:519:14: warning: no previous prototype for 'bpf_fentry_test7' [-Wmissing-prototypes]
     519 | int noinline bpf_fentry_test7(struct bpf_fentry_test_t *arg)
         |              ^~~~~~~~~~~~~~~~
>> net/bpf/test_run.c:524:14: warning: no previous prototype for 'bpf_fentry_test8' [-Wmissing-prototypes]
     524 | int noinline bpf_fentry_test8(struct bpf_fentry_test_t *arg)
         |              ^~~~~~~~~~~~~~~~


vim +30 kernel/cgroup/rstat.c

041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  19  
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  20  /**
6162cef0f741c7 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  21   * cgroup_rstat_updated - keep track of updated rstat_cpu
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  22   * @cgrp: target cgroup
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  23   * @cpu: cpu on which rstat_cpu was updated
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  24   *
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  25   * @cgrp's rstat_cpu on @cpu was updated.  Put it on the parent's matching
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  26   * rstat_cpu->updated_children list.  See the comment on top of
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  27   * cgroup_rstat_cpu definition for details.
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  28   */
760b15a8e5d45d kernel/cgroup/rstat.c David Vernet    2023-01-23  29  BPF_KFUNC(void cgroup_rstat_updated(struct cgroup *cgrp, int cpu))
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25 @30  {
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  31  	raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  32  	unsigned long flags;
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  33  
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  34  	/*
d8ef4b38cb69d9 kernel/cgroup/rstat.c Tejun Heo       2020-04-09  35  	 * Speculative already-on-list test. This may race leading to
d8ef4b38cb69d9 kernel/cgroup/rstat.c Tejun Heo       2020-04-09  36  	 * temporary inaccuracies, which is fine.
d8ef4b38cb69d9 kernel/cgroup/rstat.c Tejun Heo       2020-04-09  37  	 *
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  38  	 * Because @parent's updated_children is terminated with @parent
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  39  	 * instead of NULL, we can tell whether @cgrp is on the list by
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  40  	 * testing the next pointer for NULL.
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  41  	 */
eda09706b240ca kernel/cgroup/rstat.c Michal Koutný   2021-11-03  42  	if (data_race(cgroup_rstat_cpu(cgrp, cpu)->updated_next))
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  43  		return;
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  44  
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  45  	raw_spin_lock_irqsave(cpu_lock, flags);
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  46  
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  47  	/* put @cgrp and all ancestors on the corresponding updated lists */
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  48  	while (true) {
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  49  		struct cgroup_rstat_cpu *rstatc = cgroup_rstat_cpu(cgrp, cpu);
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  50  		struct cgroup *parent = cgroup_parent(cgrp);
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  51  		struct cgroup_rstat_cpu *prstatc;
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  52  
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  53  		/*
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  54  		 * Both additions and removals are bottom-up.  If a cgroup
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  55  		 * is already in the tree, all ancestors are.
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  56  		 */
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  57  		if (rstatc->updated_next)
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  58  			break;
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  59  
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  60  		/* Root has no parent to link it to, but mark it busy */
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  61  		if (!parent) {
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  62  			rstatc->updated_next = cgrp;
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  63  			break;
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  64  		}
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  65  
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  66  		prstatc = cgroup_rstat_cpu(parent, cpu);
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  67  		rstatc->updated_next = prstatc->updated_children;
c58632b3631cb2 kernel/cgroup/rstat.c Tejun Heo       2018-04-26  68  		prstatc->updated_children = cgrp;
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  69  
dc26532aed0ab2 kernel/cgroup/rstat.c Johannes Weiner 2021-04-29  70  		cgrp = parent;
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  71  	}
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  72  
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  73  	raw_spin_unlock_irqrestore(cpu_lock, flags);
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  74  }
041cd640b2f3c5 kernel/cgroup/stat.c  Tejun Heo       2017-09-25  75
Daniel Borkmann Jan. 23, 2023, 7:04 p.m. UTC | #6
On 1/23/23 7:54 PM, Alexei Starovoitov wrote:
> On Mon, Jan 23, 2023 at 12:48:27PM -0600, David Vernet wrote:
>> On Mon, Jan 23, 2023 at 10:33:05AM -0800, Alexei Starovoitov wrote:
>>> On Mon, Jan 23, 2023 at 11:15:06AM -0600, David Vernet wrote:
>>>> -void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
>>>> +BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
>>>>   {
>>>>   	struct btf_struct_meta *meta = meta__ign;
>>>>   	u64 size = local_type_id__k;
>>>> @@ -1790,7 +1786,7 @@ void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
>>>>   	return p;
>>>>   }
>>>>   
>>>> -void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
>>>> +BPF_KFUNC(void bpf_obj_drop_impl(void *p__alloc, void *meta__ign))
>>>>   {
>>>
>>> The following also works:
>>> -BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
>>> +BPF_KFUNC(
>>> +void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
>>> +)
>>>
>>> and it looks a little bit cleaner to me.
>>>
>>> git grep -A1 BPF_KFUNC
>>> can still find all instances of kfuncs.
>>>
>>> wdyt?
>>
>> I'm fine with putting it on its own line if that's your preference.
>> Agreed that it might be a bit cleaner, especially for functions with the
>> return type on its own line, so we'd have e.g.:
>>
>> BPF_KFUNC(
>> struct nf_conn *
>> bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
>> 		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
> 
> Yeah. Especially for those.
> 
>> ) {
>>
>> // ...
>>
>> }
>>
>> Note the presence of the { on the closing paren. Are you ok with that?
>> Otherwise I think it will look a bit odd:
> 
> Yep. Good idea. Either ){ or ) { look good to me.
> 
>> BPF_KFUNC(
>> struct nf_conn *
>> bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
>> 		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
>> )
>> {
>>
>> }

Did you look into making this similar to the EXPORT_SYMBOL() infra? If possible
that would look much more natural to developers, e.g.:

struct nf_conn *
bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
  		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
{
	[...]
}

EXPORT_BPF_KFUNC(bpf_skb_ct_lookup);
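
For comparison, one hypothetical way such an EXPORT_SYMBOL()-style annotation
could keep a kfunc alive under LTO without wrapping its definition is to emit
a __used reference to the symbol. This is only speculation about the shape of
Daniel's suggestion, not code from the series:

/* Hypothetical: a __used static pointer makes the kfunc referenced,
 * so neither the compiler nor LTO may discard it. */
#define EXPORT_BPF_KFUNC(name)						\
	static typeof(name) *__bpf_kfunc_ref_##name __used = &name

EXPORT_BPF_KFUNC(bpf_skb_ct_lookup);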
kernel test robot Jan. 23, 2023, 7:12 p.m. UTC | #7
Hi David,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/David-Vernet/bpf-Add-BPF_KFUNC-macro-for-defining-kfuncs/20230124-011804
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230123171506.71995-4-void%40manifault.com
patch subject: [PATCH bpf-next v2 3/3] bpf: Use BPF_KFUNC macro at all kfunc definitions
config: powerpc-allyesconfig (https://download.01.org/0day-ci/archive/20230124/202301240259.xwHsyJl4-lkp@intel.com/config)
compiler: powerpc-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/760b15a8e5d45d6e9925d2439e0d052de969b361
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review David-Vernet/bpf-Add-BPF_KFUNC-macro-for-defining-kfuncs/20230124-011804
        git checkout 760b15a8e5d45d6e9925d2439e0d052de969b361
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=powerpc olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=powerpc SHELL=/bin/bash kernel/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> kernel/trace/bpf_trace.c:1233: warning: expecting prototype for bpf_lookup_user_key(). Prototype was for BPF_KFUNC() instead
>> kernel/trace/bpf_trace.c:1282: warning: expecting prototype for bpf_lookup_system_key(). Prototype was for BPF_KFUNC() instead
>> kernel/trace/bpf_trace.c:1306: warning: expecting prototype for bpf_key_put(). Prototype was for BPF_KFUNC() instead
>> kernel/trace/bpf_trace.c:1328: warning: expecting prototype for bpf_verify_pkcs7_signature(). Prototype was for BPF_KFUNC() instead


vim +1233 kernel/trace/bpf_trace.c

f92c1e183604c2 Jiri Olsa     2021-12-08  1205  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1206  #ifdef CONFIG_KEYS
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1207  /**
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1208   * bpf_lookup_user_key - lookup a key by its serial
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1209   * @serial: key handle serial number
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1210   * @flags: lookup-specific flags
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1211   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1212   * Search a key with a given *serial* and the provided *flags*.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1213   * If found, increment the reference count of the key by one, and
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1214   * return it in the bpf_key structure.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1215   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1216   * The bpf_key structure must be passed to bpf_key_put() when done
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1217   * with it, so that the key reference count is decremented and the
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1218   * bpf_key structure is freed.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1219   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1220   * Permission checks are deferred to the time the key is used by
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1221   * one of the available key-specific kfuncs.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1222   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1223   * Set *flags* with KEY_LOOKUP_CREATE, to attempt creating a requested
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1224   * special keyring (e.g. session keyring), if it doesn't yet exist.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1225   * Set *flags* with KEY_LOOKUP_PARTIAL, to lookup a key without waiting
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1226   * for the key construction, and to retrieve uninstantiated keys (keys
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1227   * without data attached to them).
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1228   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1229   * Return: a bpf_key pointer with a valid key pointer if the key is found, a
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1230   *         NULL pointer otherwise.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1231   */
760b15a8e5d45d David Vernet  2023-01-23  1232  BPF_KFUNC(struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags))
f3cf4134c5c6c4 Roberto Sassu 2022-09-20 @1233  {
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1234  	key_ref_t key_ref;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1235  	struct bpf_key *bkey;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1236  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1237  	if (flags & ~KEY_LOOKUP_ALL)
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1238  		return NULL;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1239  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1240  	/*
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1241  	 * Permission check is deferred until the key is used, as the
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1242  	 * intent of the caller is unknown here.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1243  	 */
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1244  	key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1245  	if (IS_ERR(key_ref))
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1246  		return NULL;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1247  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1248  	bkey = kmalloc(sizeof(*bkey), GFP_KERNEL);
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1249  	if (!bkey) {
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1250  		key_put(key_ref_to_ptr(key_ref));
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1251  		return NULL;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1252  	}
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1253  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1254  	bkey->key = key_ref_to_ptr(key_ref);
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1255  	bkey->has_ref = true;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1256  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1257  	return bkey;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1258  }
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1259  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1260  /**
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1261   * bpf_lookup_system_key - lookup a key by a system-defined ID
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1262   * @id: key ID
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1263   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1264   * Obtain a bpf_key structure with a key pointer set to the passed key ID.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1265   * The key pointer is marked as invalid, to prevent bpf_key_put() from
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1266   * attempting to decrement the key reference count on that pointer. The key
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1267   * pointer set in such way is currently understood only by
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1268   * verify_pkcs7_signature().
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1269   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1270   * Set *id* to one of the values defined in include/linux/verification.h:
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1271   * 0 for the primary keyring (immutable keyring of system keys);
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1272   * VERIFY_USE_SECONDARY_KEYRING for both the primary and secondary keyring
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1273   * (where keys can be added only if they are vouched for by existing keys
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1274   * in those keyrings); VERIFY_USE_PLATFORM_KEYRING for the platform
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1275   * keyring (primarily used by the integrity subsystem to verify a kexec'ed
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1276   * kernel image and, possibly, the initramfs signature).
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1277   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1278   * Return: a bpf_key pointer with an invalid key pointer set from the
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1279   *         pre-determined ID on success, a NULL pointer otherwise
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1280   */
760b15a8e5d45d David Vernet  2023-01-23  1281  BPF_KFUNC(struct bpf_key *bpf_lookup_system_key(u64 id))
f3cf4134c5c6c4 Roberto Sassu 2022-09-20 @1282  {
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1283  	struct bpf_key *bkey;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1284  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1285  	if (system_keyring_id_check(id) < 0)
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1286  		return NULL;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1287  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1288  	bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1289  	if (!bkey)
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1290  		return NULL;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1291  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1292  	bkey->key = (struct key *)(unsigned long)id;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1293  	bkey->has_ref = false;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1294  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1295  	return bkey;
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1296  }
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1297  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1298  /**
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1299   * bpf_key_put - decrement key reference count if key is valid and free bpf_key
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1300   * @bkey: bpf_key structure
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1301   *
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1302   * Decrement the reference count of the key inside *bkey*, if the pointer
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1303   * is valid, and free *bkey*.
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1304   */
760b15a8e5d45d David Vernet  2023-01-23  1305  BPF_KFUNC(void bpf_key_put(struct bpf_key *bkey))
f3cf4134c5c6c4 Roberto Sassu 2022-09-20 @1306  {
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1307  	if (bkey->has_ref)
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1308  		key_put(bkey->key);
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1309  
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1310  	kfree(bkey);
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1311  }
f3cf4134c5c6c4 Roberto Sassu 2022-09-20  1312  
865b0566d8f1a0 Roberto Sassu 2022-09-20  1313  #ifdef CONFIG_SYSTEM_DATA_VERIFICATION
865b0566d8f1a0 Roberto Sassu 2022-09-20  1314  /**
865b0566d8f1a0 Roberto Sassu 2022-09-20  1315   * bpf_verify_pkcs7_signature - verify a PKCS#7 signature
865b0566d8f1a0 Roberto Sassu 2022-09-20  1316   * @data_ptr: data to verify
865b0566d8f1a0 Roberto Sassu 2022-09-20  1317   * @sig_ptr: signature of the data
865b0566d8f1a0 Roberto Sassu 2022-09-20  1318   * @trusted_keyring: keyring with keys trusted for signature verification
865b0566d8f1a0 Roberto Sassu 2022-09-20  1319   *
865b0566d8f1a0 Roberto Sassu 2022-09-20  1320   * Verify the PKCS#7 signature *sig_ptr* against the supplied *data_ptr*
865b0566d8f1a0 Roberto Sassu 2022-09-20  1321   * with keys in a keyring referenced by *trusted_keyring*.
865b0566d8f1a0 Roberto Sassu 2022-09-20  1322   *
865b0566d8f1a0 Roberto Sassu 2022-09-20  1323   * Return: 0 on success, a negative value on error.
865b0566d8f1a0 Roberto Sassu 2022-09-20  1324   */
760b15a8e5d45d David Vernet  2023-01-23  1325  BPF_KFUNC(int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
865b0566d8f1a0 Roberto Sassu 2022-09-20  1326  					 struct bpf_dynptr_kern *sig_ptr,
760b15a8e5d45d David Vernet  2023-01-23  1327  					 struct bpf_key *trusted_keyring))
865b0566d8f1a0 Roberto Sassu 2022-09-20 @1328  {
865b0566d8f1a0 Roberto Sassu 2022-09-20  1329  	int ret;
865b0566d8f1a0 Roberto Sassu 2022-09-20  1330  
865b0566d8f1a0 Roberto Sassu 2022-09-20  1331  	if (trusted_keyring->has_ref) {
865b0566d8f1a0 Roberto Sassu 2022-09-20  1332  		/*
865b0566d8f1a0 Roberto Sassu 2022-09-20  1333  		 * Do the permission check deferred in bpf_lookup_user_key().
865b0566d8f1a0 Roberto Sassu 2022-09-20  1334  		 * See bpf_lookup_user_key() for more details.
865b0566d8f1a0 Roberto Sassu 2022-09-20  1335  		 *
865b0566d8f1a0 Roberto Sassu 2022-09-20  1336  		 * A call to key_task_permission() here would be redundant, as
865b0566d8f1a0 Roberto Sassu 2022-09-20  1337  		 * it is already done by keyring_search() called by
865b0566d8f1a0 Roberto Sassu 2022-09-20  1338  		 * find_asymmetric_key().
865b0566d8f1a0 Roberto Sassu 2022-09-20  1339  		 */
865b0566d8f1a0 Roberto Sassu 2022-09-20  1340  		ret = key_validate(trusted_keyring->key);
865b0566d8f1a0 Roberto Sassu 2022-09-20  1341  		if (ret < 0)
865b0566d8f1a0 Roberto Sassu 2022-09-20  1342  			return ret;
865b0566d8f1a0 Roberto Sassu 2022-09-20  1343  	}
865b0566d8f1a0 Roberto Sassu 2022-09-20  1344  
865b0566d8f1a0 Roberto Sassu 2022-09-20  1345  	return verify_pkcs7_signature(data_ptr->data,
865b0566d8f1a0 Roberto Sassu 2022-09-20  1346  				      bpf_dynptr_get_size(data_ptr),
865b0566d8f1a0 Roberto Sassu 2022-09-20  1347  				      sig_ptr->data,
865b0566d8f1a0 Roberto Sassu 2022-09-20  1348  				      bpf_dynptr_get_size(sig_ptr),
865b0566d8f1a0 Roberto Sassu 2022-09-20  1349  				      trusted_keyring->key,
865b0566d8f1a0 Roberto Sassu 2022-09-20  1350  				      VERIFYING_UNSPECIFIED_SIGNATURE, NULL,
865b0566d8f1a0 Roberto Sassu 2022-09-20  1351  				      NULL);
865b0566d8f1a0 Roberto Sassu 2022-09-20  1352  }
865b0566d8f1a0 Roberto Sassu 2022-09-20  1353  #endif /* CONFIG_SYSTEM_DATA_VERIFICATION */
865b0566d8f1a0 Roberto Sassu 2022-09-20  1354
Jonathan Corbet Jan. 23, 2023, 9 p.m. UTC | #8
Daniel Borkmann <daniel@iogearbox.net> writes:

> Did you look into making this similar to the EXPORT_SYMBOL() infra? If possible
> that would look much more natural to developers, e.g.:
>
> struct nf_conn *
> bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
>   		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
> {
> 	[...]
> }
>
> EXPORT_BPF_KFUNC(bpf_skb_ct_lookup);

That was my question too; it's a similar functionality that would be
nice to express in a similar way.  Even better, if possible, might be to
fold it into BTF_ID_FLAGS, which I might then rename to EXPORT_KFUNC()
or some such ... :)

Thanks,

jon
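
The BTF_ID_FLAGS() registration Jon refers to already exists for every kfunc;
for example, the generic kfuncs touched by this patch are registered in
kernel/bpf/helpers.c roughly as follows (the exact flag sets shown here are
approximate):

BTF_SET8_START(generic_btf_ids)
BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_obj_drop_impl, KF_RELEASE)
BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
BTF_SET8_END(generic_btf_ids)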
David Vernet Jan. 24, 2023, 12:54 a.m. UTC | #9
On Mon, Jan 23, 2023 at 02:00:17PM -0700, Jonathan Corbet wrote:
> Daniel Borkmann <daniel@iogearbox.net> writes:
> 
> > Did you look into making this similar to the EXPORT_SYMBOL() infra? If possible
> > that would look much more natural to developers, e.g.:
> >
> > struct nf_conn *
> > bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
> >   		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
> > {
> > 	[...]
> > }
> >
> > EXPORT_BPF_KFUNC(bpf_skb_ct_lookup);
> 
> That was my question too; it's a similar functionality that would be
> nice to express in a similar way.  Even better, if possible, might be to
> fold it into BTF_ID_FLAGS, which I might then rename to EXPORT_KFUNC()
> or some such ... :)

Thanks Daniel and Jon for taking a look. These are good suggestions, and
I agree that using something like EXPORT_BPF_KFUNC or BTF_ID_FLAGS would
be preferred and probably a better and more intuitive UX.

I expect that it's going to require some nontrivial build integration
work to get this looking exactly like we want it to. AFAICT, one
difference between kfuncs and EXPORT_SYMBOL symbols is that
EXPORT_SYMBOL symbol signatures are all exported via headers, as the
intention is of course for them to be linked against by other core
kernel code or modules. For kfuncs, that's not really what we want given
that for many of the kfuncs we may not want non-BPF callers to invoke
them. I don't think any of this is impossible, just a SMOP.

I was perhaps a bit naive to think we could just throw a __bpf_kfunc
macro onto the function signatures and call it a day :-) I think it's
probably best to table this for now, and either I or someone else can
come back to it when we have bandwidth to solve the problem more
appropriately.

> 
> Thanks,
> 
> jon
Christoph Hellwig Jan. 24, 2023, 7:15 a.m. UTC | #10
I don't think this is the way to go.  For one the syntax looks odd,
and for another, we had a rough agreement at the kernel summit
to make BPF kfuncs look more like exported symbols.

So can you please try to instead make this an EXPORT_SYMBOL_BPF that
looks and feels more like EXPORT_SYMBOL?
David Vernet Jan. 24, 2023, 2:15 p.m. UTC | #11
On Mon, Jan 23, 2023 at 11:15:04PM -0800, Christoph Hellwig wrote:
> I don't think this is the way to go.  For one the syntax looks odd,
> and for another, we had a rough agreement at the kernel summit
> to make BPF kfuncs look more like exported symbols.
> 
> So can you please try to instead make this an EXPORT_SYMBOL_BPF that
> looks and feels more like EXPORT_SYMBOL?

Yeah, that matches what others (Daniel and Jon) have suggested in
another thread. I'll echo what I said in [0], which is essentially that
I agree with you all that something which more closely resembles
EXPORT_SYMBOL* is a better approach, and am fine with tabling this for
now until I have a bit more bandwidth to work on something that's a more
complete / appropriate solution.

[0]: https://lore.kernel.org/all/Y88sMlmrq0wCFSRP@maniforge.lan/

Thanks for the review! Will CC you on the v3, whenever that is.

- David
Jonathan Corbet Jan. 24, 2023, 2:50 p.m. UTC | #12
David Vernet <void@manifault.com> writes:

> I was perhaps a bit naive to think we could just throw a __bpf_kfunc
> macro onto the function signatures and call it a day :-) I think it's
> probably best to table this for now, and either I or someone else can
> come back to it when we have bandwidth to solve the problem more
> appropriately.

Now I feel bad ... I was just tossing out a thought, not wanting to
bikeshed this work into oblivion.  If what you have solves a real
problem and is the best that can be done now, perhaps it should just go
in and a "more appropriate" solution can be adopted later, should
somebody manage to come up with it?

Thanks,

jon
David Vernet Jan. 24, 2023, 4:20 p.m. UTC | #13
On Tue, Jan 24, 2023 at 07:50:31AM -0700, Jonathan Corbet wrote:
> David Vernet <void@manifault.com> writes:
> 
> > I was perhaps a bit naive to think we could just throw a __bpf_kfunc
> > macro onto the function signatures and call it a day :-) I think it's
> > probably best to table this for now, and either I or someone else can
> > come back to it when we have bandwidth to solve the problem more
> > appropriately.
> 
> Now I feel bad ... I was just tossing out a thought, not wanting to
> bikeshed this work into oblivion.  If what you have solves a real

No apologies necessary. I don't think this qualifies as bikeshedding.
IMO folks are raising legitimate UX concerns, which is important and
worth getting right.

> problem and is the best that can be done now, perhaps it should just go
> in and a "more appropriate" solution can be adopted later, should
> somebody manage to come up with it?

That would be my preference, but I also understand folks' sentiment of
wanting to keep out what they feel like is odd syntax, as Christoph said
in [0], and Daniel alluded to earlier in this thread.

[0]: https://lore.kernel.org/all/Y8+FeH7rz8jDTubt@infradead.org/

I tested on an LTO build and wrapper kfuncs (with external linkage) were
not being stripped despite not being called from anywhere else in the
kernel, so for now I _think_ it's safe to call this patch set more of a
cleanup / future-proofing than solving an immediate and pressing problem
(as long as anyone adding kfuncs carefully follows the directions in
[1]). In other words, I think we have some time to do this the right way
without paying too much of a cost later. If we set up the UX correctly,
just adding an EXPORT_SYMBOL_KFUNC call (or something to that effect,
including just using BTF_ID_FLAGS) should be minimal effort even if
there are a lot more kfuncs by then.

[1]: https://docs.kernel.org/bpf/kfuncs.html

If it turns out that we start to observe problems in LTO builds without
specifying __used and/or noinline, or if folks are repeatedly making
mistakes when adding kfuncs (by e.g. not giving wrapper kfuncs external
linkage) then I think it would be a stronger case to get this in now and
fix it up later.

Thanks,
David
Alan Maguire Jan. 31, 2023, 3:15 p.m. UTC | #14
On 24/01/2023 16:20, David Vernet wrote:
> On Tue, Jan 24, 2023 at 07:50:31AM -0700, Jonathan Corbet wrote:
>> David Vernet <void@manifault.com> writes:
>>
>>> I was perhaps a bit naive to think we could just throw a __bpf_kfunc
>>> macro onto the function signatures and call it a day :-) I think it's
>>> probably best to table this for now, and either I or someone else can
>>> come back to it when we have bandwidth to solve the problem more
>>> appropriately.
>>
>> Now I feel bad ... I was just tossing out a thought, not wanting to
>> bikeshed this work into oblivion.  If what you have solves a real
> 
> No apologies necessary. I don't think this qualifies as bikeshedding.
> IMO folks are raising legitimate UX concerns, which is important and
> worth getting right.
> 
>> problem and is the best that can be done now, perhaps it should just go
>> in and a "more appropriate" solution can be adopted later, should
>> somebody manage to come up with it?
> 
> That would be my preference, but I also understand folks' sentiment of
> wanting to keep out what they feel like is odd syntax, as Christoph said
> in [0], and Daniel alluded to earlier in this thread.
> 
> [0]: https://lore.kernel.org/all/Y8+FeH7rz8jDTubt@infradead.org/
> 
> I tested on an LTO build and wrapper kfuncs (with external linkage) were
> not being stripped despite not being called from anywhere else in the
> kernel, so for now I _think_ it's safe to call this patch set more of a
> cleanup / future-proofing than solving an immediate and pressing problem
> (as long as anyone adding kfuncs carefully follows the directions in
> [1]). In other words, I think we have some time to do this the right way
> without paying too much of a cost later. If we set up the UX correctly,
> just adding an EXPORT_SYMBOL_KFUNC call (or something to that effect,
> including just using BTF_ID_FLAGS) should be minimal effort even if
> there are a lot more kfuncs by then.
> 
> [1]: https://docs.kernel.org/bpf/kfuncs.html
> 
> If it turns out that we start to observe problems in LTO builds without
> specifying __used and/or noinline, or if folks are repeatedly making
> mistakes when adding kfuncs (by e.g. not giving wrapper kfuncs external
> linkage) then I think it would be a stronger case to get this in now and
> fix it up later.
>

hi David,

I think I may have stumbled upon such a case. We're working on improving
the relationship between the generated BPF Type Format (BTF) info
for the kernel and the actual function signatures, doing things like
spotting optimized-out parameters and not including such functions
in the final BTF since tracing such functions violates user expectations.
The changes also remove functions with inconsistent prototypes (same
name, different function prototype).

As part of that work [1], I ran into an issue with kfuncs. Because some of these
functions have minimal definitions, the compiler tries to be clever and as
a result parameters are not represented in DWARF. As a consequence of this,
we do not generate a BTF representation for the kfunc (since DWARF is telling
us the function has optimized-out parameters), and so we then don't have BTF IDs
for the associated kfunc, which is then not usable. The issue of trace accuracy
is important for users, so we're hoping to land those changes in dwarves soon.
 
As described in [2] adding a prefixed

__attribute__ ((optimize("O0"))) 

...to the kfunc sorts this out, so having that attribute rolled into a prefix
definition like the one you've proposed would solve this in the short term.
There may be a better way to solve the problem I've run into, but I can't
see an easy solution right now.

Would the above be feasible do you think? Thanks!

Alan

[1] https://lore.kernel.org/bpf/1675088985-20300-1-git-send-email-alan.maguire@oracle.com/
[2] https://lore.kernel.org/bpf/fe5d42d1-faad-d05e-99ad-1c2c04776950@oracle.com/

 
> Thanks,
> David
>
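
To make the failure mode Alan describes concrete: some kfuncs have bodies
that never touch their arguments, so at normal optimization levels the
compiler need not materialize the parameters, and DWARF records them as
optimized out. A hypothetical function showing the shape that triggers this
(not one from the tree):

/* 'task' and 'flags' are never used, so the compiler can drop them;
 * DWARF then lacks parameter locations and pahole skips the BTF. */
int bpf_example_probe(struct task_struct *task, u64 flags)
{
	return 0;
}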
David Vernet Jan. 31, 2023, 3:44 p.m. UTC | #15
On Tue, Jan 31, 2023 at 03:15:25PM +0000, Alan Maguire wrote:
> On 24/01/2023 16:20, David Vernet wrote:
> > On Tue, Jan 24, 2023 at 07:50:31AM -0700, Jonathan Corbet wrote:
> >> David Vernet <void@manifault.com> writes:
> >>
> >>> I was perhaps a bit naive to think we could just throw a __bpf_kfunc
> >>> macro onto the function signatures and call it a day :-) I think it's
> >>> probably best to table this for now, and either I or someone else can
> >>> come back to it when we have bandwidth to solve the problem more
> >>> appropriately.
> >>
> >> Now I feel bad ... I was just tossing out a thought, not wanting to
> >> bikeshed this work into oblivion.  If what you have solves a real
> > 
> > No apologies necessary. I don't think this qualifies as bikeshedding.
> > IMO folks are raising legitimate UX concerns, which is important and
> > worth getting right.
> > 
> >> problem and is the best that can be done now, perhaps it should just go
> >> in and a "more appropriate" solution can be adopted later, should
> >> somebody manage to come up with it?
> > 
> > That would be my preference, but I also understand folks' sentiment of
> > wanting to keep out what they feel like is odd syntax, as Christoph said
> > in [0], and Daniel alluded to earlier in this thread.
> > 
> > [0]: https://lore.kernel.org/all/Y8+FeH7rz8jDTubt@infradead.org/
> > 
> > I tested on an LTO build and wrapper kfuncs (with external linkage) were
> > not being stripped despite not being called from anywhere else in the
> > kernel, so for now I _think_ it's safe to call this patch set more of a
> > cleanup / future-proofing than solving an immediate and pressing problem
> > (as long as anyone adding kfuncs carefully follows the directions in
> > [1]). In other words, I think we have some time to do this the right way
> > without paying too much of a cost later. If we set up the UX correctly,
> > just adding an EXPORT_SYMBOL_KFUNC call (or something to that effect,
> > including just using BTF_ID_FLAGS) should be minimal effort even if
> > there are a lot more kfuncs by then.
> > 
> > [1]: https://docs.kernel.org/bpf/kfuncs.html
> > 
> > If it turns out that we start to observe problems in LTO builds without
> > specifying __used and/or noinline, or if folks are repeatedly making
> > mistakes when adding kfuncs (by e.g. not giving wrapper kfuncs external
> > linkage) then I think it would be a stronger case to get this in now and
> > fix it up later.
> >
> 
> hi David,
> 
> I think I may have stumbled upon such a case. We're working on improving
> the relationship between the generated BPF Type Format (BTF) info
> for the kernel and the actual function signatures, doing things like
> spotting optimized-out parameters and not including such functions
> in the final BTF since tracing such functions violates user expectations.
> The changes also remove functions with inconsistent prototypes (same
> name, different function prototype).
> 
> As part of that work [1], I ran into an issue with kfuncs. Because some of these
> functions have minimal definitions, the compiler tries to be clever and as
> a result parameters are not represented in DWARF. As a consequence of this,
> we do not generate a BTF representation for the kfunc (since DWARF is telling
> us the function has optimized-out parameters), and so we then don't have BTF IDs
> for the associated kfunc, which is then not usable. The issue of trace accuracy
> is important for users, so we're hoping to land those changes in dwarves soon.

Hi Alan,

I see. Thanks for explaining. So it seems that maybe the issue is
slightly more urgent than we first thought. Given that folks aren't keen
on the BPF_KFUNC macro approach that wraps the function definition,
maybe we can go back to the __bpf_kfunc proposal from [0] as a stopgap
solution until we can properly support something like
EXPORT_SYMBOL_KFUNC. Alexei -- what do you think?

[0]: https://lore.kernel.org/bpf/Y7kCsjBZ%2FFrsWW%2Fe@maniforge.lan/T/

>  
> As described in [2] adding a prefixed
> 
> __attribute__ ((optimize("O0"))) 
> 
> ...to the kfunc sorts this out, so having that attribute rolled into a prefix
> definition like the one you've proposed would solve this in the short term.

Does just using __attribute__((__used__)) work? Many of these kfuncs are
called on hotpaths in BPF programs, so compiling them with no
optimization is not an ideal or likely even realistic option. Not to
mention the fact that not all kfuncs are BPF-exclusive (meaning you can
export a normal kernel function that's called by the main kernel as a
kfunc).

> There may be a better way to solve the problem I've run into, but I can't
> see an easy solution right now.
> 
> Would the above be feasible do you think? Thanks!
> 
> Alan
> 
> [1] https://lore.kernel.org/bpf/1675088985-20300-1-git-send-email-alan.maguire@oracle.com/
> [2] https://lore.kernel.org/bpf/fe5d42d1-faad-d05e-99ad-1c2c04776950@oracle.com/
> 
>  

Thanks,
David
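
For reference, the __bpf_kfunc proposal from [0] amounts to a per-definition
attribute prefix rather than a wrapping macro, roughly along these lines (a
sketch of the idea, not the proposed patch verbatim):

/* Presumed shape of the stopgap annotation: keep the symbol alive
 * under LTO and forbid inlining, without touching the prototype. */
#define __bpf_kfunc __used noinline

__bpf_kfunc struct bpf_key *bpf_lookup_system_key(u64 id)
{
	/* ... */
}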
Alexei Starovoitov Jan. 31, 2023, 5:30 p.m. UTC | #16
On Tue, Jan 31, 2023 at 7:44 AM David Vernet <void@manifault.com> wrote:
>
> On Tue, Jan 31, 2023 at 03:15:25PM +0000, Alan Maguire wrote:
> > On 24/01/2023 16:20, David Vernet wrote:
> > > On Tue, Jan 24, 2023 at 07:50:31AM -0700, Jonathan Corbet wrote:
> > >> David Vernet <void@manifault.com> writes:
> > >>
> > >>> I was perhaps a bit naive to think we could just throw a __bpf_kfunc
> > >>> macro onto the function signatures and call it a day :-) I think it's
> > >>> probably best to table this for now, and either I or someone else can
> > >>> come back to it when we have bandwidth to solve the problem more
> > >>> appropriately.
> > >>
> > >> Now I feel bad ... I was just tossing out a thought, not wanting to
> > >> bikeshed this work into oblivion.  If what you have solves a real
> > >
> > > No apologies necessary. I don't think this qualifies as bikeshedding.
> > > IMO folks are raising legitimate UX concerns, which is important and
> > > worth getting right.
> > >
> > >> problem and is the best that can be done now, perhaps it should just go
> > >> in and a "more appropriate" solution can be adopted later, should
> > >> somebody manage to come up with it?
> > >
> > > That would be my preference, but I also understand folks' sentiment of
> > > wanting to keep out what they feel like is odd syntax, as Christoph said
> > > in [0], and Daniel alluded to earlier in this thread.
> > >
> > > [0]: https://lore.kernel.org/all/Y8+FeH7rz8jDTubt@infradead.org/
> > >
> > > I tested on an LTO build and wrapper kfuncs (with external linkage) were
> > > not being stripped despite not being called from anywhere else in the
> > > kernel, so for now I _think_ it's safe to call this patch set more of a
> > > cleanup / future-proofing than solving an immediate and pressing problem
> > > (as long as anyone adding kfuncs carefully follows the directions in
> > > [1]). In other words, I think we have some time to do this the right way
> > > without paying too much of a cost later. If we set up the UX correctly,
> > > just adding an EXPORT_SYMBOL_KFUNC call (or something to that effect,
> > > including just using BTF_ID_FLAGS) should be minimal effort even if
> > > there are a lot more kfuncs by then.
> > >
> > > [1]: https://docs.kernel.org/bpf/kfuncs.html
> > >
> > > If it turns out that we start to observe problems in LTO builds without
> > > specifying __used and/or noinline, or if folks are repeatedly making
> > > mistakes when adding kfuncs (by e.g. not giving wrapper kfuncs external
> > > linkage) then I think it would be a stronger case to get this in now and
> > > fix it up later.
> > >
> >
> > hi David,
> >
> > I think I may have stumbled upon such a case. We're working on improving
> > the relationship between the generated BPF Type Format (BTF) info
> > for the kernel and the actual function signatures, doing things like
> > spotting optimized-out parameters and not including such functions
> > in the final BTF since tracing such functions violates user expectations.
> > The changes also remove functions with inconsistent prototypes (same
> > name, different function prototype).
> >
> > As part of that work [1], I ran into an issue with kfuncs. Because some of these
> > functions have minimal definitions, the compiler tries to be clever and as
> > a result parameters are not represented in DWARF. As a consequence of this,
> > we do not generate a BTF representation for the kfunc (since DWARF is telling
> > us the function has optimized-out parameters), and so we then don't have BTF ids
> > for the associated kfunc, which is then not usable. The issue of trace accuracy
> > is important for users, so we're hoping to land those changes in dwarves soon.

Alan,

which kfuncs suffer from missing DWARF?
I'm assuming that issue happens only with your new pahole patches
that are trying to detect all optimized-out args, right?

> Hi Alan,
>
> I see. Thanks for explaining. So it seems that maybe the issue is
> slightly more urgent than we first thought. Given that folks aren't keen
> on the BPF_KFUNC macro approach that wraps the function definition,
> maybe we can go back to the __bpf_kfunc proposal from [0] as a stopgap
> solution until we can properly support something like
> EXPORT_SYMBOL_KFUNC. Alexei -- what do you think?
>
> [0]: https://lore.kernel.org/bpf/Y7kCsjBZ%2FFrsWW%2Fe@maniforge.lan/T/
>
> >
> > As described in [2], adding a prefixed
> >
> > __attribute__ ((optimize("O0")))

That won't work.
This attr sort of "works" in gcc only, but it's discouraged
and officially considered broken by gcc folks.
There are projects, both open and closed source, that use this attr,
but it's very fragile.
Also we really don't want to reduce optimizations in kfuncs.
They need to be fast.

> >
> > ...to the kfunc sorts this out, so having that attribute rolled into a prefix
> > definition like the one you've proposed would solve this in the short term.
>
> Does just using __attribute__((__used__)) work? Many of these kfuncs are
> called on hotpaths in BPF programs, so compiling them with no
> optimization is not ideal, and likely not even a realistic option. Not to
> mention the fact that not all kfuncs are BPF-exclusive (meaning you can
> export a normal kernel function that's called by the main kernel as a
> kfunc).

let's annotate with __used and __weak to prevent compilers from optimizing
things out. I think just __used won't be enough, but __weak should do
the trick. And noinline.
There is also __attribute__((visibility("hidden"))) to experiment with.
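
Something like the below, purely as a sketch of the idea (not a final
definition; whether __weak is really needed, and how
visibility("hidden") interacts with BTF encoding, is exactly what would
need experimenting with):

/* in include/linux/btf.h, say: */
#define __bpf_kfunc __used __weak noinline

/* kfunc definitions would then be prefixed rather than wrapped: */
__bpf_kfunc void bpf_rcu_read_unlock(void)
{
	rcu_read_unlock();
}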

Patch

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 458db2db2f81..51d102fdfc29 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1772,11 +1772,7 @@  void bpf_list_head_free(const struct btf_field *field, void *list_head,
 	}
 }
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
-
-void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
+BPF_KFUNC(void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign))
 {
 	struct btf_struct_meta *meta = meta__ign;
 	u64 size = local_type_id__k;
@@ -1790,7 +1786,7 @@  void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
 	return p;
 }
 
-void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
+BPF_KFUNC(void bpf_obj_drop_impl(void *p__alloc, void *meta__ign))
 {
 	struct btf_struct_meta *meta = meta__ign;
 	void *p = p__alloc;
@@ -1811,12 +1807,12 @@  static void __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *hea
 	tail ? list_add_tail(n, h) : list_add(n, h);
 }
 
-void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node)
+BPF_KFUNC(void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node))
 {
 	return __bpf_list_add(node, head, false);
 }
 
-void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node)
+BPF_KFUNC(void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node))
 {
 	return __bpf_list_add(node, head, true);
 }
@@ -1834,12 +1830,12 @@  static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tai
 	return (struct bpf_list_node *)n;
 }
 
-struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head)
+BPF_KFUNC(struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head))
 {
 	return __bpf_list_del(head, false);
 }
 
-struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
+BPF_KFUNC(struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head))
 {
 	return __bpf_list_del(head, true);
 }
@@ -1850,7 +1846,7 @@  struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
  * bpf_task_release().
  * @p: The task on which a reference is being acquired.
  */
-struct task_struct *bpf_task_acquire(struct task_struct *p)
+BPF_KFUNC(struct task_struct *bpf_task_acquire(struct task_struct *p))
 {
 	return get_task_struct(p);
 }
@@ -1861,7 +1857,7 @@  struct task_struct *bpf_task_acquire(struct task_struct *p)
  * released by calling bpf_task_release().
  * @p: The task on which a reference is being acquired.
  */
-struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p)
+BPF_KFUNC(struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p))
 {
 	/* For the time being this function returns NULL, as it's not currently
 	 * possible to safely acquire a reference to a task with RCU protection
@@ -1913,7 +1909,7 @@  struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p)
  * be released by calling bpf_task_release().
  * @pp: A pointer to a task kptr on which a reference is being acquired.
  */
-struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
+BPF_KFUNC(struct task_struct *bpf_task_kptr_get(struct task_struct **pp))
 {
 	/* We must return NULL here until we have clarity on how to properly
 	 * leverage RCU for ensuring a task's lifetime. See the comment above
@@ -1926,7 +1922,7 @@  struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
  * bpf_task_release - Release the reference acquired on a task.
  * @p: The task on which a reference is being released.
  */
-void bpf_task_release(struct task_struct *p)
+BPF_KFUNC(void bpf_task_release(struct task_struct *p))
 {
 	if (!p)
 		return;
@@ -1941,7 +1937,7 @@  void bpf_task_release(struct task_struct *p)
  * calling bpf_cgroup_release().
  * @cgrp: The cgroup on which a reference is being acquired.
  */
-struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp)
+BPF_KFUNC(struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp))
 {
 	cgroup_get(cgrp);
 	return cgrp;
@@ -1953,7 +1949,7 @@  struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp)
  * be released by calling bpf_cgroup_release().
  * @cgrpp: A pointer to a cgroup kptr on which a reference is being acquired.
  */
-struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp)
+BPF_KFUNC(struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp))
 {
 	struct cgroup *cgrp;
 
@@ -1985,7 +1981,7 @@  struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp)
  * drops to 0.
  * @cgrp: The cgroup on which a reference is being released.
  */
-void bpf_cgroup_release(struct cgroup *cgrp)
+BPF_KFUNC(void bpf_cgroup_release(struct cgroup *cgrp))
 {
 	if (!cgrp)
 		return;
@@ -2000,7 +1996,7 @@  void bpf_cgroup_release(struct cgroup *cgrp)
  * @cgrp: The cgroup for which we're performing a lookup.
  * @level: The level of ancestor to look up.
  */
-struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level)
+BPF_KFUNC(struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level))
 {
 	struct cgroup *ancestor;
 
@@ -2019,7 +2015,7 @@  struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level)
  * stored in a map, or released with bpf_task_release().
  * @pid: The pid of the task being looked up.
  */
-struct task_struct *bpf_task_from_pid(s32 pid)
+BPF_KFUNC(struct task_struct *bpf_task_from_pid(s32 pid))
 {
 	struct task_struct *p;
 
@@ -2032,28 +2028,26 @@  struct task_struct *bpf_task_from_pid(s32 pid)
 	return p;
 }
 
-void *bpf_cast_to_kern_ctx(void *obj)
+BPF_KFUNC(void *bpf_cast_to_kern_ctx(void *obj))
 {
 	return obj;
 }
 
-void *bpf_rdonly_cast(void *obj__ign, u32 btf_id__k)
+BPF_KFUNC(void *bpf_rdonly_cast(void *obj__ign, u32 btf_id__k))
 {
 	return obj__ign;
 }
 
-void bpf_rcu_read_lock(void)
+BPF_KFUNC(void bpf_rcu_read_lock(void))
 {
 	rcu_read_lock();
 }
 
-void bpf_rcu_read_unlock(void)
+BPF_KFUNC(void bpf_rcu_read_unlock(void))
 {
 	rcu_read_unlock();
 }
 
-__diag_pop();
-
 BTF_SET8_START(generic_btf_ids)
 #ifdef CONFIG_KEXEC_CORE
 BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 793ecff29038..fbf41b434e1b 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -26,7 +26,7 @@  static struct cgroup_rstat_cpu *cgroup_rstat_cpu(struct cgroup *cgrp, int cpu)
  * rstat_cpu->updated_children list.  See the comment on top of
  * cgroup_rstat_cpu definition for details.
  */
-void cgroup_rstat_updated(struct cgroup *cgrp, int cpu)
+BPF_KFUNC(void cgroup_rstat_updated(struct cgroup *cgrp, int cpu))
 {
 	raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
 	unsigned long flags;
@@ -231,7 +231,7 @@  static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
  *
  * This function may block.
  */
-void cgroup_rstat_flush(struct cgroup *cgrp)
+BPF_KFUNC(void cgroup_rstat_flush(struct cgroup *cgrp))
 {
 	might_sleep();
 
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 969e8f52f7da..6ea662dbc053 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -6,6 +6,7 @@ 
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/btf.h>
 #include <linux/capability.h>
 #include <linux/mm.h>
 #include <linux/file.h>
@@ -975,7 +976,7 @@  void __noclone __crash_kexec(struct pt_regs *regs)
 }
 STACK_FRAME_NON_STANDARD(__crash_kexec);
 
-void crash_kexec(struct pt_regs *regs)
+BPF_KFUNC(void crash_kexec(struct pt_regs *regs))
 {
 	int old_cpu, this_cpu;
 
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 8124f1ad0d4a..c30be8d3a6fc 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1204,10 +1204,6 @@  static const struct bpf_func_proto bpf_get_func_arg_cnt_proto = {
 };
 
 #ifdef CONFIG_KEYS
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "kfuncs which will be used in BPF programs");
-
 /**
  * bpf_lookup_user_key - lookup a key by its serial
  * @serial: key handle serial number
@@ -1233,7 +1229,7 @@  __diag_ignore_all("-Wmissing-prototypes",
  * Return: a bpf_key pointer with a valid key pointer if the key is found, a
  *         NULL pointer otherwise.
  */
-struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags)
+BPF_KFUNC(struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags))
 {
 	key_ref_t key_ref;
 	struct bpf_key *bkey;
@@ -1282,7 +1278,7 @@  struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags)
  * Return: a bpf_key pointer with an invalid key pointer set from the
  *         pre-determined ID on success, a NULL pointer otherwise
  */
-struct bpf_key *bpf_lookup_system_key(u64 id)
+BPF_KFUNC(struct bpf_key *bpf_lookup_system_key(u64 id))
 {
 	struct bpf_key *bkey;
 
@@ -1306,7 +1302,7 @@  struct bpf_key *bpf_lookup_system_key(u64 id)
  * Decrement the reference count of the key inside *bkey*, if the pointer
  * is valid, and free *bkey*.
  */
-void bpf_key_put(struct bpf_key *bkey)
+BPF_KFUNC(void bpf_key_put(struct bpf_key *bkey))
 {
 	if (bkey->has_ref)
 		key_put(bkey->key);
@@ -1326,9 +1322,9 @@  void bpf_key_put(struct bpf_key *bkey)
  *
  * Return: 0 on success, a negative value on error.
  */
-int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
-			       struct bpf_dynptr_kern *sig_ptr,
-			       struct bpf_key *trusted_keyring)
+BPF_KFUNC(int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
+					 struct bpf_dynptr_kern *sig_ptr,
+					 struct bpf_key *trusted_keyring))
 {
 	int ret;
 
@@ -1356,8 +1352,6 @@  int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr,
 }
 #endif /* CONFIG_SYSTEM_DATA_VERIFICATION */
 
-__diag_pop();
-
 BTF_SET8_START(key_sig_kfunc_set)
 BTF_ID_FLAGS(func, bpf_lookup_user_key, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_lookup_system_key, KF_ACQUIRE | KF_RET_NULL)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 2723623429ac..f024717b98a2 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -481,10 +481,7 @@  static int bpf_test_finish(const union bpf_attr *kattr,
  * architecture dependent calling conventions. 7+ can be supported in the
  * future.
  */
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in vmlinux BTF");
-int noinline bpf_fentry_test1(int a)
+BPF_KFUNC(int bpf_fentry_test1(int a))
 {
 	return a + 1;
 }
@@ -529,23 +526,23 @@  int noinline bpf_fentry_test8(struct bpf_fentry_test_t *arg)
 	return (long)arg->a;
 }
 
-int noinline bpf_modify_return_test(int a, int *b)
+BPF_KFUNC(int bpf_modify_return_test(int a, int *b))
 {
 	*b += 1;
 	return a + *b;
 }
 
-u64 noinline bpf_kfunc_call_test1(struct sock *sk, u32 a, u64 b, u32 c, u64 d)
+BPF_KFUNC(u64 bpf_kfunc_call_test1(struct sock *sk, u32 a, u64 b, u32 c, u64 d))
 {
 	return a + b + c + d;
 }
 
-int noinline bpf_kfunc_call_test2(struct sock *sk, u32 a, u32 b)
+BPF_KFUNC(int bpf_kfunc_call_test2(struct sock *sk, u32 a, u32 b))
 {
 	return a + b;
 }
 
-struct sock * noinline bpf_kfunc_call_test3(struct sock *sk)
+BPF_KFUNC(struct sock *bpf_kfunc_call_test3(struct sock *sk))
 {
 	return sk;
 }
@@ -574,21 +571,19 @@  static struct prog_test_ref_kfunc prog_test_struct = {
 	.cnt = REFCOUNT_INIT(1),
 };
 
-noinline struct prog_test_ref_kfunc *
-bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr)
+BPF_KFUNC(struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr))
 {
 	refcount_inc(&prog_test_struct.cnt);
 	return &prog_test_struct;
 }
 
-noinline struct prog_test_member *
-bpf_kfunc_call_memb_acquire(void)
+BPF_KFUNC(struct prog_test_member *bpf_kfunc_call_memb_acquire(void))
 {
 	WARN_ON_ONCE(1);
 	return NULL;
 }
 
-noinline void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p)
+BPF_KFUNC(void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p))
 {
 	if (!p)
 		return;
@@ -596,11 +591,11 @@  noinline void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p)
 	refcount_dec(&p->cnt);
 }
 
-noinline void bpf_kfunc_call_memb_release(struct prog_test_member *p)
+BPF_KFUNC(void bpf_kfunc_call_memb_release(struct prog_test_member *p))
 {
 }
 
-noinline void bpf_kfunc_call_memb1_release(struct prog_test_member1 *p)
+BPF_KFUNC(void bpf_kfunc_call_memb1_release(struct prog_test_member1 *p))
 {
 	WARN_ON_ONCE(1);
 }
@@ -613,12 +608,14 @@  static int *__bpf_kfunc_call_test_get_mem(struct prog_test_ref_kfunc *p, const i
 	return (int *)p;
 }
 
-noinline int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, const int rdwr_buf_size)
+BPF_KFUNC(int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p,
+						const int rdwr_buf_size))
 {
 	return __bpf_kfunc_call_test_get_mem(p, rdwr_buf_size);
 }
 
-noinline int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size)
+BPF_KFUNC(int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p,
+						  const int rdonly_buf_size))
 {
 	return __bpf_kfunc_call_test_get_mem(p, rdonly_buf_size);
 }
@@ -628,17 +625,18 @@  noinline int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p,
  * Acquire functions must return struct pointers, so these ones are
  * failing.
  */
-noinline int *bpf_kfunc_call_test_acq_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size)
+BPF_KFUNC(int *bpf_kfunc_call_test_acq_rdonly_mem(struct prog_test_ref_kfunc *p,
+						  const int rdonly_buf_size))
 {
 	return __bpf_kfunc_call_test_get_mem(p, rdonly_buf_size);
 }
 
-noinline void bpf_kfunc_call_int_mem_release(int *p)
+BPF_KFUNC(void bpf_kfunc_call_int_mem_release(int *p))
 {
 }
 
-noinline struct prog_test_ref_kfunc *
-bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **pp, int a, int b)
+BPF_KFUNC(struct prog_test_ref_kfunc *
+bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **pp, int a, int b))
 {
 	struct prog_test_ref_kfunc *p = READ_ONCE(*pp);
 
@@ -686,52 +684,50 @@  struct prog_test_fail3 {
 	char arr2[];
 };
 
-noinline void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb)
+BPF_KFUNC(void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb))
 {
 }
 
-noinline void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p)
+BPF_KFUNC(void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p))
 {
 }
 
-noinline void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p)
+BPF_KFUNC(void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p))
 {
 }
 
-noinline void bpf_kfunc_call_test_fail1(struct prog_test_fail1 *p)
+BPF_KFUNC(void bpf_kfunc_call_test_fail1(struct prog_test_fail1 *p))
 {
 }
 
-noinline void bpf_kfunc_call_test_fail2(struct prog_test_fail2 *p)
+BPF_KFUNC(void bpf_kfunc_call_test_fail2(struct prog_test_fail2 *p))
 {
 }
 
-noinline void bpf_kfunc_call_test_fail3(struct prog_test_fail3 *p)
+BPF_KFUNC(void bpf_kfunc_call_test_fail3(struct prog_test_fail3 *p))
 {
 }
 
-noinline void bpf_kfunc_call_test_mem_len_pass1(void *mem, int mem__sz)
+BPF_KFUNC(void bpf_kfunc_call_test_mem_len_pass1(void *mem, int mem__sz))
 {
 }
 
-noinline void bpf_kfunc_call_test_mem_len_fail1(void *mem, int len)
+BPF_KFUNC(void bpf_kfunc_call_test_mem_len_fail1(void *mem, int len))
 {
 }
 
-noinline void bpf_kfunc_call_test_mem_len_fail2(u64 *mem, int len)
+BPF_KFUNC(void bpf_kfunc_call_test_mem_len_fail2(u64 *mem, int len))
 {
 }
 
-noinline void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p)
+BPF_KFUNC(void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p))
 {
 }
 
-noinline void bpf_kfunc_call_test_destructive(void)
+BPF_KFUNC(void bpf_kfunc_call_test_destructive(void))
 {
 }
 
-__diag_pop();
-
 BTF_SET8_START(bpf_test_modify_return_ids)
 BTF_ID_FLAGS(func, bpf_modify_return_test)
 BTF_ID_FLAGS(func, bpf_fentry_test1, KF_SLEEPABLE)
diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index d2c470524e58..a3f6d3ab4182 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -295,7 +295,7 @@  static void bbr_set_pacing_rate(struct sock *sk, u32 bw, int gain)
 }
 
 /* override sysctl_tcp_min_tso_segs */
-static u32 bbr_min_tso_segs(struct sock *sk)
+BPF_KFUNC(static u32 bbr_min_tso_segs(struct sock *sk))
 {
 	return sk->sk_pacing_rate < (bbr_min_tso_rate >> 3) ? 1 : 2;
 }
@@ -328,7 +328,7 @@  static void bbr_save_cwnd(struct sock *sk)
 		bbr->prior_cwnd = max(bbr->prior_cwnd, tcp_snd_cwnd(tp));
 }
 
-static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+BPF_KFUNC(static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event))
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct bbr *bbr = inet_csk_ca(sk);
@@ -1023,7 +1023,7 @@  static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
 	bbr_update_gains(sk);
 }
 
-static void bbr_main(struct sock *sk, const struct rate_sample *rs)
+BPF_KFUNC(static void bbr_main(struct sock *sk, const struct rate_sample *rs))
 {
 	struct bbr *bbr = inet_csk_ca(sk);
 	u32 bw;
@@ -1035,7 +1035,7 @@  static void bbr_main(struct sock *sk, const struct rate_sample *rs)
 	bbr_set_cwnd(sk, rs, rs->acked_sacked, bw, bbr->cwnd_gain);
 }
 
-static void bbr_init(struct sock *sk)
+BPF_KFUNC(static void bbr_init(struct sock *sk))
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct bbr *bbr = inet_csk_ca(sk);
@@ -1077,7 +1077,7 @@  static void bbr_init(struct sock *sk)
 	cmpxchg(&sk->sk_pacing_status, SK_PACING_NONE, SK_PACING_NEEDED);
 }
 
-static u32 bbr_sndbuf_expand(struct sock *sk)
+BPF_KFUNC(static u32 bbr_sndbuf_expand(struct sock *sk))
 {
 	/* Provision 3 * cwnd since BBR may slow-start even during recovery. */
 	return 3;
@@ -1086,7 +1086,7 @@  static u32 bbr_sndbuf_expand(struct sock *sk)
 /* In theory BBR does not need to undo the cwnd since it does not
  * always reduce cwnd on losses (see bbr_main()). Keep it for now.
  */
-static u32 bbr_undo_cwnd(struct sock *sk)
+BPF_KFUNC(static u32 bbr_undo_cwnd(struct sock *sk))
 {
 	struct bbr *bbr = inet_csk_ca(sk);
 
@@ -1097,7 +1097,7 @@  static u32 bbr_undo_cwnd(struct sock *sk)
 }
 
 /* Entering loss recovery, so save cwnd for when we exit or undo recovery. */
-static u32 bbr_ssthresh(struct sock *sk)
+BPF_KFUNC(static u32 bbr_ssthresh(struct sock *sk))
 {
 	bbr_save_cwnd(sk);
 	return tcp_sk(sk)->snd_ssthresh;
@@ -1125,7 +1125,7 @@  static size_t bbr_get_info(struct sock *sk, u32 ext, int *attr,
 	return 0;
 }
 
-static void bbr_set_state(struct sock *sk, u8 new_state)
+BPF_KFUNC(static void bbr_set_state(struct sock *sk, u8 new_state))
 {
 	struct bbr *bbr = inet_csk_ca(sk);
 
diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
index d3cae40749e8..0a7ca941af65 100644
--- a/net/ipv4/tcp_cong.c
+++ b/net/ipv4/tcp_cong.c
@@ -403,7 +403,7 @@  int tcp_set_congestion_control(struct sock *sk, const char *name, bool load,
  * ABC caps N to 2. Slow start exits when cwnd grows over ssthresh and
  * returns the leftover acks to adjust cwnd in congestion avoidance mode.
  */
-u32 tcp_slow_start(struct tcp_sock *tp, u32 acked)
+BPF_KFUNC(u32 tcp_slow_start(struct tcp_sock *tp, u32 acked))
 {
 	u32 cwnd = min(tcp_snd_cwnd(tp) + acked, tp->snd_ssthresh);
 
@@ -417,7 +417,7 @@  EXPORT_SYMBOL_GPL(tcp_slow_start);
 /* In theory this is tp->snd_cwnd += 1 / tp->snd_cwnd (or alternative w),
  * for every packet that was ACKed.
  */
-void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked)
+BPF_KFUNC(void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked))
 {
 	/* If credits accumulated at a higher w, apply them gently now. */
 	if (tp->snd_cwnd_cnt >= w) {
@@ -443,7 +443,7 @@  EXPORT_SYMBOL_GPL(tcp_cong_avoid_ai);
 /* This is Jacobson's slow start and congestion avoidance.
  * SIGCOMM '88, p. 328.
  */
-void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+BPF_KFUNC(void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked))
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
@@ -462,7 +462,7 @@  void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked)
 EXPORT_SYMBOL_GPL(tcp_reno_cong_avoid);
 
 /* Slow start threshold is half the congestion window (min 2) */
-u32 tcp_reno_ssthresh(struct sock *sk)
+BPF_KFUNC(u32 tcp_reno_ssthresh(struct sock *sk))
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
 
@@ -470,7 +470,7 @@  u32 tcp_reno_ssthresh(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(tcp_reno_ssthresh);
 
-u32 tcp_reno_undo_cwnd(struct sock *sk)
+BPF_KFUNC(u32 tcp_reno_undo_cwnd(struct sock *sk))
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
 
diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index 768c10c1f649..bad9f3adba7f 100644
--- a/net/ipv4/tcp_cubic.c
+++ b/net/ipv4/tcp_cubic.c
@@ -126,7 +126,7 @@  static inline void bictcp_hystart_reset(struct sock *sk)
 	ca->sample_cnt = 0;
 }
 
-static void cubictcp_init(struct sock *sk)
+BPF_KFUNC(static void cubictcp_init(struct sock *sk))
 {
 	struct bictcp *ca = inet_csk_ca(sk);
 
@@ -139,7 +139,7 @@  static void cubictcp_init(struct sock *sk)
 		tcp_sk(sk)->snd_ssthresh = initial_ssthresh;
 }
 
-static void cubictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+BPF_KFUNC(static void cubictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event))
 {
 	if (event == CA_EVENT_TX_START) {
 		struct bictcp *ca = inet_csk_ca(sk);
@@ -321,7 +321,7 @@  static inline void bictcp_update(struct bictcp *ca, u32 cwnd, u32 acked)
 	ca->cnt = max(ca->cnt, 2U);
 }
 
-static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+BPF_KFUNC(static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked))
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct bictcp *ca = inet_csk_ca(sk);
@@ -338,7 +338,7 @@  static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
 	tcp_cong_avoid_ai(tp, ca->cnt, acked);
 }
 
-static u32 cubictcp_recalc_ssthresh(struct sock *sk)
+BPF_KFUNC(static u32 cubictcp_recalc_ssthresh(struct sock *sk))
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
 	struct bictcp *ca = inet_csk_ca(sk);
@@ -355,7 +355,7 @@  static u32 cubictcp_recalc_ssthresh(struct sock *sk)
 	return max((tcp_snd_cwnd(tp) * beta) / BICTCP_BETA_SCALE, 2U);
 }
 
-static void cubictcp_state(struct sock *sk, u8 new_state)
+BPF_KFUNC(static void cubictcp_state(struct sock *sk, u8 new_state))
 {
 	if (new_state == TCP_CA_Loss) {
 		bictcp_reset(inet_csk_ca(sk));
@@ -445,7 +445,7 @@  static void hystart_update(struct sock *sk, u32 delay)
 	}
 }
 
-static void cubictcp_acked(struct sock *sk, const struct ack_sample *sample)
+BPF_KFUNC(static void cubictcp_acked(struct sock *sk, const struct ack_sample *sample))
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
 	struct bictcp *ca = inet_csk_ca(sk);
diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
index e0a2ca7456ff..3748099f014e 100644
--- a/net/ipv4/tcp_dctcp.c
+++ b/net/ipv4/tcp_dctcp.c
@@ -75,7 +75,7 @@  static void dctcp_reset(const struct tcp_sock *tp, struct dctcp *ca)
 	ca->old_delivered_ce = tp->delivered_ce;
 }
 
-static void dctcp_init(struct sock *sk)
+BPF_KFUNC(static void dctcp_init(struct sock *sk))
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
 
@@ -104,7 +104,7 @@  static void dctcp_init(struct sock *sk)
 	INET_ECN_dontxmit(sk);
 }
 
-static u32 dctcp_ssthresh(struct sock *sk)
+BPF_KFUNC(static u32 dctcp_ssthresh(struct sock *sk))
 {
 	struct dctcp *ca = inet_csk_ca(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
@@ -113,7 +113,7 @@  static u32 dctcp_ssthresh(struct sock *sk)
 	return max(tcp_snd_cwnd(tp) - ((tcp_snd_cwnd(tp) * ca->dctcp_alpha) >> 11U), 2U);
 }
 
-static void dctcp_update_alpha(struct sock *sk, u32 flags)
+BPF_KFUNC(static void dctcp_update_alpha(struct sock *sk, u32 flags))
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
 	struct dctcp *ca = inet_csk_ca(sk);
@@ -169,7 +169,7 @@  static void dctcp_react_to_loss(struct sock *sk)
 	tp->snd_ssthresh = max(tcp_snd_cwnd(tp) >> 1U, 2U);
 }
 
-static void dctcp_state(struct sock *sk, u8 new_state)
+BPF_KFUNC(static void dctcp_state(struct sock *sk, u8 new_state))
 {
 	if (new_state == TCP_CA_Recovery &&
 	    new_state != inet_csk(sk)->icsk_ca_state)
@@ -179,7 +179,7 @@  static void dctcp_state(struct sock *sk, u8 new_state)
 	 */
 }
 
-static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
+BPF_KFUNC(static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev))
 {
 	struct dctcp *ca = inet_csk_ca(sk);
 
@@ -229,7 +229,7 @@  static size_t dctcp_get_info(struct sock *sk, u32 ext, int *attr,
 	return 0;
 }
 
-static u32 dctcp_cwnd_undo(struct sock *sk)
+BPF_KFUNC(static u32 dctcp_cwnd_undo(struct sock *sk))
 {
 	const struct dctcp *ca = inet_csk_ca(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
diff --git a/net/netfilter/nf_conntrack_bpf.c b/net/netfilter/nf_conntrack_bpf.c
index 24002bc61e07..5512c8f563a7 100644
--- a/net/netfilter/nf_conntrack_bpf.c
+++ b/net/netfilter/nf_conntrack_bpf.c
@@ -230,10 +230,6 @@  static int _nf_conntrack_btf_struct_access(struct bpf_verifier_log *log,
 	return 0;
 }
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in nf_conntrack BTF");
-
 /* bpf_xdp_ct_alloc - Allocate a new CT entry
  *
  * Parameters:
@@ -249,9 +245,9 @@  __diag_ignore_all("-Wmissing-prototypes",
  * @opts__sz	- Length of the bpf_ct_opts structure
  *		    Must be NF_BPF_CT_OPTS_SZ (12)
  */
-struct nf_conn___init *
+BPF_KFUNC(struct nf_conn___init *
 bpf_xdp_ct_alloc(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple,
-		 u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
+		 u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz))
 {
 	struct xdp_buff *ctx = (struct xdp_buff *)xdp_ctx;
 	struct nf_conn *nfct;
@@ -283,9 +279,9 @@  bpf_xdp_ct_alloc(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple,
  * @opts__sz	- Length of the bpf_ct_opts structure
  *		    Must be NF_BPF_CT_OPTS_SZ (12)
  */
-struct nf_conn *
+BPF_KFUNC(struct nf_conn *
 bpf_xdp_ct_lookup(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple,
-		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
+		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz))
 {
 	struct xdp_buff *ctx = (struct xdp_buff *)xdp_ctx;
 	struct net *caller_net;
@@ -316,9 +312,9 @@  bpf_xdp_ct_lookup(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple,
  * @opts__sz	- Length of the bpf_ct_opts structure
  *		    Must be NF_BPF_CT_OPTS_SZ (12)
  */
-struct nf_conn___init *
+BPF_KFUNC(struct nf_conn___init *
 bpf_skb_ct_alloc(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
-		 u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
+		 u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz))
 {
 	struct sk_buff *skb = (struct sk_buff *)skb_ctx;
 	struct nf_conn *nfct;
@@ -351,9 +347,9 @@  bpf_skb_ct_alloc(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
  * @opts__sz	- Length of the bpf_ct_opts structure
  *		    Must be NF_BPF_CT_OPTS_SZ (12)
  */
-struct nf_conn *
+BPF_KFUNC(struct nf_conn *
 bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
-		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz)
+		  u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz))
 {
 	struct sk_buff *skb = (struct sk_buff *)skb_ctx;
 	struct net *caller_net;
@@ -376,7 +372,7 @@  bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple,
  * @nfct	 - Pointer to referenced nf_conn___init object, obtained
  *		   using bpf_xdp_ct_alloc or bpf_skb_ct_alloc.
  */
-struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i)
+BPF_KFUNC(struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i))
 {
 	struct nf_conn *nfct = (struct nf_conn *)nfct_i;
 	int err;
@@ -400,7 +396,7 @@  struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i)
  * @nf_conn	 - Pointer to referenced nf_conn object, obtained using
  *		   bpf_xdp_ct_lookup or bpf_skb_ct_lookup.
  */
-void bpf_ct_release(struct nf_conn *nfct)
+BPF_KFUNC(void bpf_ct_release(struct nf_conn *nfct))
 {
 	if (!nfct)
 		return;
@@ -417,7 +413,7 @@  void bpf_ct_release(struct nf_conn *nfct)
  *                 bpf_xdp_ct_alloc or bpf_skb_ct_alloc.
  * @timeout      - Timeout in msecs.
  */
-void bpf_ct_set_timeout(struct nf_conn___init *nfct, u32 timeout)
+BPF_KFUNC(void bpf_ct_set_timeout(struct nf_conn___init *nfct, u32 timeout))
 {
 	__nf_ct_set_timeout((struct nf_conn *)nfct, msecs_to_jiffies(timeout));
 }
@@ -432,7 +428,7 @@  void bpf_ct_set_timeout(struct nf_conn___init *nfct, u32 timeout)
  *		   bpf_ct_insert_entry, bpf_xdp_ct_lookup, or bpf_skb_ct_lookup.
  * @timeout      - New timeout in msecs.
  */
-int bpf_ct_change_timeout(struct nf_conn *nfct, u32 timeout)
+BPF_KFUNC(int bpf_ct_change_timeout(struct nf_conn *nfct, u32 timeout))
 {
 	return __nf_ct_change_timeout(nfct, msecs_to_jiffies(timeout));
 }
@@ -447,7 +443,7 @@  int bpf_ct_change_timeout(struct nf_conn *nfct, u32 timeout)
  *		   bpf_xdp_ct_alloc or bpf_skb_ct_alloc.
  * @status       - New status value.
  */
-int bpf_ct_set_status(const struct nf_conn___init *nfct, u32 status)
+BPF_KFUNC(int bpf_ct_set_status(const struct nf_conn___init *nfct, u32 status))
 {
 	return nf_ct_change_status_common((struct nf_conn *)nfct, status);
 }
@@ -462,13 +458,11 @@  int bpf_ct_set_status(const struct nf_conn___init *nfct, u32 status)
  *		   bpf_ct_insert_entry, bpf_xdp_ct_lookup or bpf_skb_ct_lookup.
  * @status       - New status value.
  */
-int bpf_ct_change_status(struct nf_conn *nfct, u32 status)
+BPF_KFUNC(int bpf_ct_change_status(struct nf_conn *nfct, u32 status))
 {
 	return nf_ct_change_status_common(nfct, status);
 }
 
-__diag_pop()
-
 BTF_SET8_START(nf_ct_kfunc_set)
 BTF_ID_FLAGS(func, bpf_xdp_ct_alloc, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_xdp_ct_lookup, KF_ACQUIRE | KF_RET_NULL)
diff --git a/net/netfilter/nf_nat_bpf.c b/net/netfilter/nf_nat_bpf.c
index 0fa5a0bbb0ff..8de652043819 100644
--- a/net/netfilter/nf_nat_bpf.c
+++ b/net/netfilter/nf_nat_bpf.c
@@ -12,10 +12,6 @@ 
 #include <net/netfilter/nf_conntrack_core.h>
 #include <net/netfilter/nf_nat.h>
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in nf_nat BTF");
-
 /* bpf_ct_set_nat_info - Set source or destination nat address
  *
  * Set source or destination nat address of the newly allocated
@@ -30,9 +26,9 @@  __diag_ignore_all("-Wmissing-prototypes",
  *		  interpreted as select a random port.
  * @manip	- NF_NAT_MANIP_SRC or NF_NAT_MANIP_DST
  */
-int bpf_ct_set_nat_info(struct nf_conn___init *nfct,
-			union nf_inet_addr *addr, int port,
-			enum nf_nat_manip_type manip)
+BPF_KFUNC(int bpf_ct_set_nat_info(struct nf_conn___init *nfct,
+				  union nf_inet_addr *addr, int port,
+				  enum nf_nat_manip_type manip))
 {
 	struct nf_conn *ct = (struct nf_conn *)nfct;
 	u16 proto = nf_ct_l3num(ct);
@@ -54,8 +50,6 @@  int bpf_ct_set_nat_info(struct nf_conn___init *nfct,
 	return nf_nat_setup_info(ct, &range, manip) == NF_DROP ? -ENOMEM : 0;
 }
 
-__diag_pop()
-
 BTF_SET8_START(nf_nat_kfunc_set)
 BTF_ID_FLAGS(func, bpf_ct_set_nat_info, KF_TRUSTED_ARGS)
 BTF_SET8_END(nf_nat_kfunc_set)
diff --git a/net/xfrm/xfrm_interface_bpf.c b/net/xfrm/xfrm_interface_bpf.c
index 1ef2162cebcf..b2b2ae603e4d 100644
--- a/net/xfrm/xfrm_interface_bpf.c
+++ b/net/xfrm/xfrm_interface_bpf.c
@@ -27,10 +27,6 @@  struct bpf_xfrm_info {
 	int link;
 };
 
-__diag_push();
-__diag_ignore_all("-Wmissing-prototypes",
-		  "Global functions as their definitions will be in xfrm_interface BTF");
-
 /* bpf_skb_get_xfrm_info - Get XFRM metadata
  *
  * Parameters:
@@ -39,8 +35,7 @@  __diag_ignore_all("-Wmissing-prototypes",
  * @to		- Pointer to memory to which the metadata will be copied
  *		    Cannot be NULL
  */
-__used noinline
-int bpf_skb_get_xfrm_info(struct __sk_buff *skb_ctx, struct bpf_xfrm_info *to)
+BPF_KFUNC(int bpf_skb_get_xfrm_info(struct __sk_buff *skb_ctx, struct bpf_xfrm_info *to))
 {
 	struct sk_buff *skb = (struct sk_buff *)skb_ctx;
 	struct xfrm_md_info *info;
@@ -62,9 +57,8 @@  int bpf_skb_get_xfrm_info(struct __sk_buff *skb_ctx, struct bpf_xfrm_info *to)
  * @from	- Pointer to memory from which the metadata will be copied
  *		    Cannot be NULL
  */
-__used noinline
-int bpf_skb_set_xfrm_info(struct __sk_buff *skb_ctx,
-			  const struct bpf_xfrm_info *from)
+BPF_KFUNC(int bpf_skb_set_xfrm_info(struct __sk_buff *skb_ctx,
+				    const struct bpf_xfrm_info *from))
 {
 	struct sk_buff *skb = (struct sk_buff *)skb_ctx;
 	struct metadata_dst *md_dst;
@@ -96,8 +90,6 @@  int bpf_skb_set_xfrm_info(struct __sk_buff *skb_ctx,
 	return 0;
 }
 
-__diag_pop()
-
 BTF_SET8_START(xfrm_ifc_kfunc_set)
 BTF_ID_FLAGS(func, bpf_skb_get_xfrm_info)
 BTF_ID_FLAGS(func, bpf_skb_set_xfrm_info)
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 5085fea3cac5..9392e5e406ec 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -59,8 +59,7 @@  bpf_testmod_test_struct_arg_5(void) {
 	return bpf_testmod_test_struct_arg_result;
 }
 
-noinline void
-bpf_testmod_test_mod_kfunc(int i)
+BPF_KFUNC(void bpf_testmod_test_mod_kfunc(int i))
 {
 	*(int *)this_cpu_ptr(&bpf_testmod_ksym_percpu) = i;
 }