Message ID | 20241107161026.2903044-2-aleksander.lobakin@intel.com
---|---
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | xdp: a fistful of generic changes (+libeth_xdp)
```diff
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 93a822d3c468..1034c0348995 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -182,6 +182,7 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key)
 	}
 	return true;
 }
+EXPORT_SYMBOL_GPL(static_key_slow_inc_cpuslocked);
 
 bool static_key_slow_inc(struct static_key *key)
 {
@@ -342,6 +343,7 @@ void static_key_slow_dec_cpuslocked(struct static_key *key)
 	STATIC_KEY_CHECK_USE(key);
 	__static_key_slow_dec_cpuslocked(key);
 }
+EXPORT_SYMBOL_GPL(static_key_slow_dec_cpuslocked);
 
 void __static_key_slow_dec_deferred(struct static_key *key,
 				    struct delayed_work *work,
```
Sometimes there's a need to modify a lot of static keys, or to modify the same key multiple times in a loop. In such cases, it is more efficient to take cpus_read_lock() once and then call the _cpuslocked() variants. The enable/disable functions are already exported; the refcounted counterparts, however, are not. Export them as well to let modules save some cycles.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 kernel/jump_label.c | 2 ++
 1 file changed, 2 insertions(+)
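For context, here is a minimal sketch of the batching pattern the commit message describes, as it might look in a module once these symbols are exported. The helper name and the `keys`/`nr` parameters are hypothetical; only cpus_read_lock()/cpus_read_unlock() and the two newly exported _cpuslocked() functions are real kernel API:

```c
#include <linux/cpu.h>
#include <linux/jump_label.h>

/* Hypothetical helper: bump the refcount of a batch of static keys.
 * Without the exported _cpuslocked() variants, a module would have to
 * call static_key_slow_inc() per key, which takes and releases
 * cpus_read_lock() internally on every single call.
 */
static void batch_enable_keys(struct static_key **keys, unsigned int nr)
{
	unsigned int i;

	cpus_read_lock();

	for (i = 0; i < nr; i++)
		static_key_slow_inc_cpuslocked(keys[i]);

	cpus_read_unlock();
}
```

The matching teardown path would call static_key_slow_dec_cpuslocked() under the same lock; taking the CPU hotplug read lock once around the whole loop is what saves the cycles mentioned above.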