| Message ID | 20230628015634.33193-4-alexei.starovoitov@gmail.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | bpf: Introduce bpf_mem_cache_free_rcu(). |
On 6/28/2023 9:56 AM, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> Let the free_all() helper return the number of freed elements.
> It's not used in this patch, but it helps in debug/development of bpf_mem_alloc.
>
> For example, this diff for __free_rcu():
> -	free_all(llist_del_all(&c->waiting_for_gp_ttrace), !!c->percpu_size);
> +	printk("cpu %d freed %d objs after tasks trace\n", raw_smp_processor_id(),
> +	       free_all(llist_del_all(&c->waiting_for_gp_ttrace), !!c->percpu_size));
>
> would show how busy RCU tasks trace is.
> In an artificial benchmark where one cpu is allocating and a different cpu is freeing,
> the RCU tasks trace won't be able to keep up and the list of objects
> would keep growing from thousands to millions, eventually OOMing.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Hou Tao <houtao1@huawei.com>
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index b0011217be6c..693651d2648b 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -223,12 +223,16 @@ static void free_one(void *obj, bool percpu)
 		kfree(obj);
 }
 
-static void free_all(struct llist_node *llnode, bool percpu)
+static int free_all(struct llist_node *llnode, bool percpu)
 {
 	struct llist_node *pos, *t;
+	int cnt = 0;
 
-	llist_for_each_safe(pos, t, llnode)
+	llist_for_each_safe(pos, t, llnode) {
 		free_one(pos, percpu);
+		cnt++;
+	}
+	return cnt;
 }
 
 static void __free_rcu(struct rcu_head *head)
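For readers who want to play with the counting pattern outside the kernel, below is a minimal userspace sketch of what the patch does: walk a singly linked list with a "safe" iterator (grabbing the successor before freeing the current node) and return how many elements were released, so the caller can log the count the way the printk() example above does. The struct and function names here (struct node, free_all_count) are illustrative only and are not the kernel's llist API.

/* Userspace sketch of the free_all() counting pattern; names are illustrative. */
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int payload;
};

/* Free every node on the list and return how many were freed. */
static int free_all_count(struct node *head)
{
	struct node *pos, *next;
	int cnt = 0;

	for (pos = head; pos; pos = next) {
		next = pos->next;	/* grab the successor before freeing pos */
		free(pos);
		cnt++;
	}
	return cnt;
}

int main(void)
{
	struct node *head = NULL;

	/* Build a small list of three elements. */
	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));

		n->payload = i;
		n->next = head;
		head = n;
	}

	/* Mirrors the debugging printk() from the commit message: report the count. */
	printf("freed %d objs\n", free_all_count(head));
	return 0;
}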