
[v3,bpf-next,06/13] bpf: Further refactor alloc_bulk().

Message ID 20230628015634.33193-7-alexei.starovoitov@gmail.com (mailing list archive)
State Superseded
Series bpf: Introduce bpf_mem_cache_free_rcu().

Commit Message

Alexei Starovoitov June 28, 2023, 1:56 a.m. UTC
From: Alexei Starovoitov <ast@kernel.org>

In certain scenarios alloc_bulk() might be taking free objects mainly from the
free_by_rcu_ttrace list. In such cases get_memcg() and set_active_memcg() are
redundant, but they still show up in perf profiles. Split the loop and only set
the active memcg when allocating from slab. This patch alone shows no
performance difference, but it helps in combination with later patches.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/memalloc.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

Comments

Hou Tao June 28, 2023, 6:25 a.m. UTC | #1
On 6/28/2023 9:56 AM, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> In certain scenarios alloc_bulk() migth be taking free objects mainly from
> free_by_rcu_ttrace list. In such case get_memcg() and set_active_memcg() are
> redundant, but they show up in perf profile. Split the loop and only set memcg
> when allocating from slab. No performance difference in this patch alone, but
> it helps in combination with further patches.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Hou Tao <houtao1@huawei.com>
Simon Horman June 28, 2023, 3:48 p.m. UTC | #2
On Tue, Jun 27, 2023 at 06:56:27PM -0700, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> In certain scenarios alloc_bulk() migth be taking free objects mainly from

Hi Alexi,

checkpatch --codespell flags: 'migth' -> 'might'
It also flags some typos in several other patches in this series.
But it seems silly to flag them individually. So I'll leave this topic here.

...
Alexei Starovoitov June 28, 2023, 6:58 p.m. UTC | #3
On Wed, Jun 28, 2023 at 8:48 AM Simon Horman <simon.horman@corigine.com> wrote:
>
> On Tue, Jun 27, 2023 at 06:56:27PM -0700, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > In certain scenarios alloc_bulk() migth be taking free objects mainly from
>
> Hi Alexi,
>
> checkpatch --codespell flags: 'migth' -> 'might'
> It also flags some typos in several other patches in this series.
> But it seems silly to flag them individually. So I'll leave this topic here.

Thanks for flagging.
Did you find this manually? bpf/netdev CI doesn't report such things.
Simon Horman June 28, 2023, 7:30 p.m. UTC | #4
On Wed, Jun 28, 2023 at 11:58:23AM -0700, Alexei Starovoitov wrote:
> On Wed, Jun 28, 2023 at 8:48 AM Simon Horman <simon.horman@corigine.com> wrote:
> >
> > On Tue, Jun 27, 2023 at 06:56:27PM -0700, Alexei Starovoitov wrote:
> > > From: Alexei Starovoitov <ast@kernel.org>
> > >
> > > In certain scenarios alloc_bulk() migth be taking free objects mainly from
> >
> > Hi Alexi,
> >
> > checkpatch --codespell flags: 'migth' -> 'might'
> > It also flags some typos in several other patches in this series.
> > But it seems silly to flag them individually. So I'll leave this topic here.
> 
> Thanks for flagging.
> Did you find this manually? bpf/netdev CI doesn't report such things.

Hi Alexei,

I found this outside of bpf/netdev CI.
Perhaps we (I?) should work on enabling it there?
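For reference, the codespell check Simon mentions can be run from the top of a kernel tree; the commit range below is an assumption for a 13-patch series, so adjust it to the branch being checked:

```shell
# Spell-check the series with checkpatch's codespell integration.
# Requires the codespell dictionary to be installed on the host.
./scripts/checkpatch.pl --codespell --git HEAD~13..HEAD
```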

Patch

diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 052fc801fb9f..0ee566a7719a 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -196,8 +196,6 @@  static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
 	void *obj;
 	int i;
 
-	memcg = get_memcg(c);
-	old_memcg = set_active_memcg(memcg);
 	for (i = 0; i < cnt; i++) {
 		/*
 		 * free_by_rcu_ttrace is only manipulated by irq work refill_work().
@@ -212,16 +210,24 @@  static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
 		 * numa node and it is not a guarantee.
 		 */
 		obj = __llist_del_first(&c->free_by_rcu_ttrace);
-		if (!obj) {
-			/* Allocate, but don't deplete atomic reserves that typical
-			 * GFP_ATOMIC would do. irq_work runs on this cpu and kmalloc
-			 * will allocate from the current numa node which is what we
-			 * want here.
-			 */
-			obj = __alloc(c, node, GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT);
-			if (!obj)
-				break;
-		}
+		if (!obj)
+			break;
+		add_obj_to_free_list(c, obj);
+	}
+	if (i >= cnt)
+		return;
+
+	memcg = get_memcg(c);
+	old_memcg = set_active_memcg(memcg);
+	for (; i < cnt; i++) {
+		/* Allocate, but don't deplete atomic reserves that typical
+		 * GFP_ATOMIC would do. irq_work runs on this cpu and kmalloc
+		 * will allocate from the current numa node which is what we
+		 * want here.
+		 */
+		obj = __alloc(c, node, GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT);
+		if (!obj)
+			break;
 		add_obj_to_free_list(c, obj);
 	}
 	set_active_memcg(old_memcg);