
[net-next,v5,1/3] net-memcg: Fold dependency into memcg pressure cond

Message ID: 20230602081135.75424-2-wuyun.abel@bytedance.com (mailing list archive)
State: Changes Requested
Delegated to: Netdev Maintainers
Series: sock: Improve condition on sockmem pressure

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 8351 this patch: 8351
netdev/cc_maintainers warning 3 maintainers not CCed: akpm@linux-foundation.org mhocko@suse.com roman.gushchin@linux.dev
netdev/build_clang success Errors and warnings before: 2256 this patch: 2256
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 9033 this patch: 9033
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 26 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Abel Wu June 2, 2023, 8:11 a.m. UTC
The callers of mem_cgroup_under_socket_pressure() must always make
sure that (mem_cgroup_sockets_enabled && sk->sk_memcg) is true. So
instead of open-coding that check at every callsite, fold the
dependency into mem_cgroup_under_socket_pressure() itself to avoid
redundancy and possible bugs.

This change might also introduce a slight function call overhead, but
only if the function ever grows too large to stay inlined. For now the
generated binaries are unchanged (checked with vimdiff), except for
net/ipv4/tcp_input.o, where the difference reported by
scripts/bloat-o-meter should be negligible for performance:

add/remove: 0/0 grow/shrink: 1/2 up/down: 5/-5 (0)
Function                                     old     new   delta
tcp_grow_window                              573     578      +5
tcp_try_rmem_schedule                       1083    1081      -2
tcp_check_space                              324     321      -3
Total: Before=44647, After=44647, chg +0.00%

So folding the dependency into mem_cgroup_under_socket_pressure()
is generally a good thing and improves readability.

Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
---
 include/linux/memcontrol.h | 2 ++
 include/net/sock.h         | 3 +--
 include/net/tcp.h          | 3 +--
 3 files changed, 4 insertions(+), 4 deletions(-)
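
To make the "redundancy and possible bugs" point concrete: before this
patch, a caller that forgot the guard would pass a NULL sk->sk_memcg into
the helper, which dereferences it. A hypothetical caller (the name
sk_memcg_pressure() is invented for illustration and is not part of the
patch) now reduces to a single call:

	/*
	 * Hypothetical caller, for illustration only. Before the fold it
	 * would have needed:
	 *
	 *	if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
	 *	    mem_cgroup_under_socket_pressure(sk->sk_memcg))
	 *
	 * and forgetting the guard meant dereferencing a NULL sk->sk_memcg
	 * inside the helper. After the fold the helper checks both
	 * conditions itself.
	 */
	static inline bool sk_memcg_pressure(const struct sock *sk)
	{
		return mem_cgroup_under_socket_pressure(sk->sk_memcg);
	}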

Comments

Shakeel Butt June 2, 2023, 8:25 p.m. UTC | #1
On Fri, Jun 02, 2023 at 04:11:33PM +0800, Abel Wu wrote:
> [...]
>
> So folding the dependency into mem_cgroup_under_socket_pressure()
> is generally a good thing and improves readability.
> 

I don't see how this improves readability. If you had removed the use
of mem_cgroup_sockets_enabled from the networking code entirely, I
could understand, but IMHO this change will actually decrease
readability, because future readers will have to reason about why we
do this check in some places but not in others.
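
For context on this point: other users of sk->sk_memcg still need the
explicit gate because they use the memcg for charging rather than for a
pressure check, so the check stays visible at those callsites after this
patch. A simplified sketch of such a callsite, loosely paraphrased from
__sk_mem_raise_allocated() in net/core/sock.c (details trimmed, not the
exact kernel source):

	/* Loosely paraphrased sketch, not the exact kernel source: the
	 * explicit gate stays here because the memcg pointer is needed
	 * for charging, not just for a pressure predicate.
	 */
	static int sk_mem_raise_sketch(struct sock *sk, int amt)
	{
		bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;

		if (memcg_charge &&
		    !mem_cgroup_charge_skmem(sk->sk_memcg, amt, GFP_KERNEL))
			return 0;	/* charge failed, suppress the allocation */

		/* ... global protocol memory accounting elided ... */
		return 1;
	}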
Abel Wu June 5, 2023, 11:52 a.m. UTC | #2
On 6/3/23 4:25 AM, Shakeel Butt wrote:
> On Fri, Jun 02, 2023 at 04:11:33PM +0800, Abel Wu wrote:
>> [...]
> 
> I don't see how this improves readability. If you had removed the use
> of mem_cgroup_sockets_enabled from the networking code entirely, I
> could understand, but IMHO this change will actually decrease
> readability, because future readers will have to reason about why we
> do this check in some places but not in others.

Yes, I agree. I am trying to get rid of this macro in the networking
code entirely, but I am stuck on inet_csk_accept().. :(
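
For readers following along, inet_csk_accept() is the sticking point
because there mem_cgroup_sockets_enabled does not guard a single
predicate but a whole block that attaches and charges the memcg of the
newly accepted socket, roughly as in the sketch below (loosely
paraphrased from net/ipv4/inet_connection_sock.c, heavily trimmed, not
the exact source), so the macro cannot simply be folded into a helper
there:

	/* Loosely paraphrased, trimmed sketch of the block in question. */
	if (mem_cgroup_sockets_enabled) {
		int amt = 0;

		/* Attach newsk->sk_memcg, then charge the memory that was
		 * already accounted while the request socket was serviced.
		 */
		lock_sock(newsk);
		mem_cgroup_sk_alloc(newsk);
		if (newsk->sk_memcg)
			amt = sk_mem_pages(newsk->sk_forward_alloc +
					   atomic_read(&newsk->sk_rmem_alloc));
		if (amt)
			mem_cgroup_charge_skmem(newsk->sk_memcg, amt,
						GFP_KERNEL | __GFP_NOFAIL);
		release_sock(newsk);
	}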

Patch

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 222d7370134c..a1aead140ff8 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1743,6 +1743,8 @@  void mem_cgroup_sk_alloc(struct sock *sk);
 void mem_cgroup_sk_free(struct sock *sk);
 static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 {
+	if (!mem_cgroup_sockets_enabled || !memcg)
+		return false;
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_pressure)
 		return true;
 	do {
diff --git a/include/net/sock.h b/include/net/sock.h
index 656ea89f60ff..3f63253ee092 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1414,8 +1414,7 @@  static inline bool sk_under_memory_pressure(const struct sock *sk)
 	if (!sk->sk_prot->memory_pressure)
 		return false;
 
-	if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
-	    mem_cgroup_under_socket_pressure(sk->sk_memcg))
+	if (mem_cgroup_under_socket_pressure(sk->sk_memcg))
 		return true;
 
 	return !!*sk->sk_prot->memory_pressure;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 14fa716cac50..d4c358bc0c52 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -259,8 +259,7 @@  extern unsigned long tcp_memory_pressure;
 /* optimized version of sk_under_memory_pressure() for TCP sockets */
 static inline bool tcp_under_memory_pressure(const struct sock *sk)
 {
-	if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
-	    mem_cgroup_under_socket_pressure(sk->sk_memcg))
+	if (mem_cgroup_under_socket_pressure(sk->sk_memcg))
 		return true;
 
 	return READ_ONCE(tcp_memory_pressure);
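
For reference, with the hunks above applied sk_under_memory_pressure()
reduces to the following (reconstructed from the include/net/sock.h hunk,
unchanged context lines included):

	static inline bool sk_under_memory_pressure(const struct sock *sk)
	{
		if (!sk->sk_prot->memory_pressure)
			return false;

		if (mem_cgroup_under_socket_pressure(sk->sk_memcg))
			return true;

		return !!*sk->sk_prot->memory_pressure;
	}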