Message ID | 20200707173612.124425-3-guro@fb.com (mailing list archive) |
---|---|
State | New, archived |
Series | [1/3] mm: memcg/slab: remove unused argument by charge_slab_page() |
On Tue, Jul 7, 2020 at 10:36 AM Roman Gushchin <guro@fb.com> wrote:
>
> Currently memcg_kmem_enabled() is optimized for the kernel memory
> accounting being off. It was so for a long time, and arguably the
> reason behind was that the kernel memory accounting was initially an
> opt-in feature. However, now it's on by default on both cgroup v1
> and cgroup v2, and it's on for all cgroups. So let's switch over
> to static_branch_likely() to reflect this fact.
>
> Unlikely there is a significant performance difference, as the cost
> of a memory allocation and its accounting significantly exceeds the
> cost of a jump. However, the conversion makes the code look more
> logically.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>
On 7/7/20 7:36 PM, Roman Gushchin wrote:
> Currently memcg_kmem_enabled() is optimized for the kernel memory
> accounting being off. It was so for a long time, and arguably the
> reason behind was that the kernel memory accounting was initially an
> opt-in feature. However, now it's on by default on both cgroup v1
> and cgroup v2, and it's on for all cgroups. So let's switch over
> to static_branch_likely() to reflect this fact.
>
> Unlikely there is a significant performance difference, as the cost
> of a memory allocation and its accounting significantly exceeds the
> cost of a jump. However, the conversion makes the code look more
> logically.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  include/linux/memcontrol.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index b8f52a3fed90..ab9322215b2e 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1456,7 +1456,7 @@ void memcg_put_cache_ids(void);
>
>  static inline bool memcg_kmem_enabled(void)
>  {
> -	return static_branch_unlikely(&memcg_kmem_enabled_key);
> +	return static_branch_likely(&memcg_kmem_enabled_key);
>  }
>
>  static inline bool memcg_kmem_bypass(void)
>
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b8f52a3fed90..ab9322215b2e 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1456,7 +1456,7 @@ void memcg_put_cache_ids(void);
 
 static inline bool memcg_kmem_enabled(void)
 {
-	return static_branch_unlikely(&memcg_kmem_enabled_key);
+	return static_branch_likely(&memcg_kmem_enabled_key);
 }
 
 static inline bool memcg_kmem_bypass(void)
Currently memcg_kmem_enabled() is optimized for the kernel memory
accounting being off. It has been that way for a long time, arguably
because kernel memory accounting was initially an opt-in feature.
However, it is now on by default on both cgroup v1 and cgroup v2, and
it is on for all cgroups. So let's switch over to
static_branch_likely() to reflect this fact.

It is unlikely that there is a significant performance difference, as
the cost of a memory allocation and its accounting significantly
exceeds the cost of a jump. However, the conversion makes the code
read more logically.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/memcontrol.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)