| Message ID | 20230326221612.169289-1-xiyou.wangcong@gmail.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Delegated to: | BPF |
| Series | [bpf-next] sock_map: include sk_psock memory overhead too |
On Mon, Mar 27, 2023 at 6:16 AM Cong Wang <xiyou.wangcong@gmail.com> wrote:
>
> From: Cong Wang <cong.wang@bytedance.com>
>
> When a socket is added to a sockmap, sk_psock is allocated too as its
> sk_user_data, therefore it should be considered as an overhead of
> sockmap memory usage.
>
> Before this patch:
>
> 1: sockmap  flags 0x0
>         key 4B  value 4B  max_entries 2  memlock 656B
>         pids echo-sockmap(549)
>
> After this patch:
>
> 9: sockmap  flags 0x0
>         key 4B  value 4B  max_entries 2  memlock 1824B
>         pids echo-sockmap(568)
>
> Fixes: 73d2c61919e9 ("bpf, net: sock_map memory usage")
> Cc: Yafang Shao <laoar.shao@gmail.com>
> Cc: Jakub Sitnicki <jakub@cloudflare.com>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> ---
>  net/core/sock_map.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> index 7c189c2e2fbf..22197e565ece 100644
> --- a/net/core/sock_map.c
> +++ b/net/core/sock_map.c
> @@ -799,9 +799,17 @@ static void sock_map_fini_seq_private(void *priv_data)
>
>  static u64 sock_map_mem_usage(const struct bpf_map *map)
>  {
> +       struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
>         u64 usage = sizeof(struct bpf_stab);
> +       int i;
>
>         usage += (u64)map->max_entries * sizeof(struct sock *);
> +
> +       for (i = 0; i < stab->map.max_entries; i++) {

Although it adds a for-loop, the operation below is quite light. So it
looks good to me.

> +               if (stab->sks[i])

Nit, stab->sks[i] can be modified in the delete path in parallel, so
there should be a READ_ONCE() here.

> +                       usage += sizeof(struct sk_psock);
> +       }
> +
>         return usage;
>  }
>
> @@ -1412,7 +1420,7 @@ static u64 sock_hash_mem_usage(const struct bpf_map *map)
>         u64 usage = sizeof(*htab);
>
>         usage += htab->buckets_num * sizeof(struct bpf_shtab_bucket);
> -       usage += atomic_read(&htab->count) * (u64)htab->elem_size;
> +       usage += atomic_read(&htab->count) * ((u64)htab->elem_size + sizeof(struct sk_psock));
>         return usage;
>  }
>
> --
> 2.34.1
>
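For illustration, a minimal sketch of the proposed sock_map_mem_usage() with Yafang's READ_ONCE() nit applied; this is what a respin might look like, not code from this series:

```c
/* Sketch: the patch's loop with the suggested READ_ONCE() annotation.
 * stab->sks[i] can be cleared by the delete path in parallel, so the
 * lockless read is marked to document the race and keep the compiler
 * from re-reading or tearing the load.
 */
static u64 sock_map_mem_usage(const struct bpf_map *map)
{
	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
	u64 usage = sizeof(struct bpf_stab);
	int i;

	usage += (u64)map->max_entries * sizeof(struct sock *);

	for (i = 0; i < stab->map.max_entries; i++) {
		if (READ_ONCE(stab->sks[i]))	/* may race with deletion */
			usage += sizeof(struct sk_psock);
	}
	return usage;
}
```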
Yafang Shao wrote:
> On Mon, Mar 27, 2023 at 6:16 AM Cong Wang <xiyou.wangcong@gmail.com> wrote:
> >
> > From: Cong Wang <cong.wang@bytedance.com>
> >
> > When a socket is added to a sockmap, sk_psock is allocated too as its
> > sk_user_data, therefore it should be considered as an overhead of
> > sockmap memory usage.
> >
> > Before this patch:
> >
> > 1: sockmap  flags 0x0
> >         key 4B  value 4B  max_entries 2  memlock 656B
> >         pids echo-sockmap(549)
> >
> > After this patch:
> >
> > 9: sockmap  flags 0x0
> >         key 4B  value 4B  max_entries 2  memlock 1824B
> >         pids echo-sockmap(568)
> >
> > Fixes: 73d2c61919e9 ("bpf, net: sock_map memory usage")
> > Cc: Yafang Shao <laoar.shao@gmail.com>
> > Cc: Jakub Sitnicki <jakub@cloudflare.com>
> > Cc: John Fastabend <john.fastabend@gmail.com>
> > Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> > ---
> >  net/core/sock_map.c | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> > index 7c189c2e2fbf..22197e565ece 100644
> > --- a/net/core/sock_map.c
> > +++ b/net/core/sock_map.c
> > @@ -799,9 +799,17 @@ static void sock_map_fini_seq_private(void *priv_data)
> >
> >  static u64 sock_map_mem_usage(const struct bpf_map *map)
> >  {
> > +       struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
> >         u64 usage = sizeof(struct bpf_stab);
> > +       int i;
> >
> >         usage += (u64)map->max_entries * sizeof(struct sock *);
> > +
> > +       for (i = 0; i < stab->map.max_entries; i++) {
>
> Although it adds a for-loop, the operation below is quite light. So it
> looks good to me.

We could track a count from update to avoid the loop?

> > +               if (stab->sks[i])
>
> Nit, stab->sks[i] can be modified in the delete path in parallel, so
> there should be a READ_ONCE() here.
>
> > +                       usage += sizeof(struct sk_psock);
> > +       }
> > +
> >         return usage;
> >  }
> >
> > @@ -1412,7 +1420,7 @@ static u64 sock_hash_mem_usage(const struct bpf_map *map)
> >         u64 usage = sizeof(*htab);
> >
> >         usage += htab->buckets_num * sizeof(struct bpf_shtab_bucket);
> > -       usage += atomic_read(&htab->count) * (u64)htab->elem_size;
> > +       usage += atomic_read(&htab->count) * ((u64)htab->elem_size + sizeof(struct sk_psock));
> >         return usage;
> >  }
> >
> > --
> > 2.34.1
> >
>
> --
> Regards
> Yafang
On Sat, Apr 1, 2023 at 9:09 AM John Fastabend <john.fastabend@gmail.com> wrote:
>
> Yafang Shao wrote:
> > On Mon, Mar 27, 2023 at 6:16 AM Cong Wang <xiyou.wangcong@gmail.com> wrote:
> > >
> > > From: Cong Wang <cong.wang@bytedance.com>
> > >
> > > When a socket is added to a sockmap, sk_psock is allocated too as its
> > > sk_user_data, therefore it should be considered as an overhead of
> > > sockmap memory usage.
> > >
> > > Before this patch:
> > >
> > > 1: sockmap  flags 0x0
> > >         key 4B  value 4B  max_entries 2  memlock 656B
> > >         pids echo-sockmap(549)
> > >
> > > After this patch:
> > >
> > > 9: sockmap  flags 0x0
> > >         key 4B  value 4B  max_entries 2  memlock 1824B
> > >         pids echo-sockmap(568)
> > >
> > > Fixes: 73d2c61919e9 ("bpf, net: sock_map memory usage")
> > > Cc: Yafang Shao <laoar.shao@gmail.com>
> > > Cc: Jakub Sitnicki <jakub@cloudflare.com>
> > > Cc: John Fastabend <john.fastabend@gmail.com>
> > > Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> > > ---
> > >  net/core/sock_map.c | 10 +++++++++-
> > >  1 file changed, 9 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> > > index 7c189c2e2fbf..22197e565ece 100644
> > > --- a/net/core/sock_map.c
> > > +++ b/net/core/sock_map.c
> > > @@ -799,9 +799,17 @@ static void sock_map_fini_seq_private(void *priv_data)
> > >
> > >  static u64 sock_map_mem_usage(const struct bpf_map *map)
> > >  {
> > > +       struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
> > >         u64 usage = sizeof(struct bpf_stab);
> > > +       int i;
> > >
> > >         usage += (u64)map->max_entries * sizeof(struct sock *);
> > > +
> > > +       for (i = 0; i < stab->map.max_entries; i++) {
> >
> > Although it adds a for-loop, the operation below is quite light. So it
> > looks good to me.
>
> We could track a count from update to avoid the loop?
>

I prefer adding a count into struct bpf_stab. We can also get the number
of socks easily with this new count, and it should be acceptable to
modify this count in the update/delete paths.
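A rough sketch of the counter approach being discussed; the `count` field name and the exact hook points in the update/delete paths are assumptions for illustration, not code from this thread:

```c
/* Hypothetical counter approach (assumed, not from this series): track
 * the number of installed sockets when a slot is populated or cleared,
 * so mem_usage avoids scanning all max_entries slots.
 */
struct bpf_stab {
	struct bpf_map map;
	struct sock **sks;
	/* ... other existing fields ... */
	atomic_t count;		/* assumed new field: populated slots */
};

/* in the update path, once the new socket is published: */
	atomic_inc(&stab->count);

/* in the delete path, once the slot is cleared: */
	atomic_dec(&stab->count);

/* mem_usage then becomes O(1): */
static u64 sock_map_mem_usage(const struct bpf_map *map)
{
	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
	u64 usage = sizeof(struct bpf_stab);

	usage += (u64)map->max_entries * sizeof(struct sock *);
	usage += (u64)atomic_read(&stab->count) * sizeof(struct sk_psock);
	return usage;
}
```

This would mirror what the hash side already does: sock_hash_mem_usage() in the patch below multiplies atomic_read(&htab->count) rather than walking the buckets.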
```diff
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 7c189c2e2fbf..22197e565ece 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -799,9 +799,17 @@ static void sock_map_fini_seq_private(void *priv_data)
 
 static u64 sock_map_mem_usage(const struct bpf_map *map)
 {
+	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
 	u64 usage = sizeof(struct bpf_stab);
+	int i;
 
 	usage += (u64)map->max_entries * sizeof(struct sock *);
+
+	for (i = 0; i < stab->map.max_entries; i++) {
+		if (stab->sks[i])
+			usage += sizeof(struct sk_psock);
+	}
+
 	return usage;
 }
 
@@ -1412,7 +1420,7 @@ static u64 sock_hash_mem_usage(const struct bpf_map *map)
 	u64 usage = sizeof(*htab);
 
 	usage += htab->buckets_num * sizeof(struct bpf_shtab_bucket);
-	usage += atomic_read(&htab->count) * (u64)htab->elem_size;
+	usage += atomic_read(&htab->count) * ((u64)htab->elem_size + sizeof(struct sk_psock));
 	return usage;
 }
```
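Sanity-checking the commit-message numbers against this accounting (assuming both of the map's two slots held sockets when memlock was sampled): 1824B − 656B = 1168B = 2 × 584B, which would put sizeof(struct sk_psock) at 584 bytes on that build; the exact size varies with kernel version and config.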