[net-next,2/3] net: netpoll: Individualize the skb pool

Message ID 20241025142025.3558051-3-leitao@debian.org (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series net: netpoll: Improve SKB pool management

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 5 this patch: 5
netdev/build_tools success Errors and warnings before: 157 (+0) this patch: 157 (+0)
netdev/cc_maintainers success CCed 6 of 6 maintainers
netdev/build_clang success Errors and warnings before: 3 this patch: 3
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 25 this patch: 25
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 79 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-10-31--09-00 (tests: 779)

Commit Message

Breno Leitao Oct. 25, 2024, 2:20 p.m. UTC
The current implementation of the netpoll system uses a global skb pool,
which can lead to inefficient memory usage and waste when targets are
disabled or no longer in use.

This can result in a significant amount of memory being unnecessarily
allocated and retained, potentially causing performance issues and
limiting the availability of resources for other system components.

Modify the netpoll system to assign a skb pool to each target instead of
using a global one.

This approach allows for more fine-grained control over memory
allocation and deallocation, ensuring that resources are only allocated
and retained as needed.

Signed-off-by: Breno Leitao <leitao@debian.org>
---
 include/linux/netpoll.h |  1 +
 net/core/netpoll.c      | 29 +++++++++++------------------
 2 files changed, 12 insertions(+), 18 deletions(-)
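
As a reading aid for the diff below, here is a minimal sketch of the per-target pool lifecycle this change moves toward. Both helpers are hypothetical illustrations, not hunks from the patch; in particular, the skb_queue_head_init() call site for np->skb_pool is not visible in the excerpt shown on this page.

/* Hypothetical helpers (not from the patch) showing the per-target
 * pool lifecycle: each struct netpoll owns its own queue, so tearing
 * a target down can release exactly that target's skbs.
 */
#include <linux/netpoll.h>
#include <linux/skbuff.h>

static void example_target_setup(struct netpoll *np)
{
	/* assumed init site; the hunk that initializes np->skb_pool
	 * is not shown in this excerpt
	 */
	skb_queue_head_init(&np->skb_pool);
}

static void example_target_teardown(struct netpoll *np)
{
	/* free only this target's pre-allocated skbs */
	skb_queue_purge(&np->skb_pool);
}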

Comments

Jakub Kicinski Nov. 1, 2024, 1:28 a.m. UTC | #1
On Fri, 25 Oct 2024 07:20:19 -0700 Breno Leitao wrote:
> The current implementation of the netpoll system uses a global skb pool,
> which can lead to inefficient memory usage and waste when targets are
> disabled or no longer in use.
> 
> This can result in a significant amount of memory being unnecessarily
> allocated and retained, potentially causing performance issues and
> limiting the availability of resources for other system components.
> 
> Modify the netpoll system to assign a skb pool to each target instead of
> using a global one.
> 
> This approach allows for more fine-grained control over memory
> allocation and deallocation, ensuring that resources are only allocated
> and retained as needed.

If memory consumption is a concern then having n pools for n targets
rather than one seems even worse? 

Is it not better to flush the pool when the last target gets disabled?
Breno Leitao Nov. 1, 2024, 11:56 a.m. UTC | #2
On Thu, Oct 31, 2024 at 06:28:57PM -0700, Jakub Kicinski wrote:
> On Fri, 25 Oct 2024 07:20:19 -0700 Breno Leitao wrote:
> > The current implementation of the netpoll system uses a global skb pool,
> > which can lead to inefficient memory usage and waste when targets are
> > disabled or no longer in use.
> > 
> > This can result in a significant amount of memory being unnecessarily
> > allocated and retained, potentially causing performance issues and
> > limiting the availability of resources for other system components.
> > 
> > Modify the netpoll system to assign a skb pool to each target instead of
> > using a global one.
> > 
> > This approach allows for more fine-grained control over memory
> > allocation and deallocation, ensuring that resources are only allocated
> > and retained as needed.
> 
> If memory consumption is a concern then having n pools for n targets
> rather than one seems even worse? 
> 
> Is it not better to flush the pool when the last target gets disabled?

That is an option as well: we can create a refcount and flush the pool
when it reaches zero. This will require some core reorganization due
to the way the buffers are initialized (at early init), but it is
totally doable. In fact, this is how I started this patchset.
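
For comparison, that refcount alternative could look roughly like the sketch below, as it might sit inside net/core/netpoll.c; the counter and both helper names are hypothetical, not code from any posted patch.

/* Hypothetical sketch: keep the single global pool, but drain it
 * when the last netpoll user goes away, as Jakub suggests.
 */
static struct sk_buff_head skb_pool;		/* the existing global pool */
static atomic_t netpoll_users = ATOMIC_INIT(0);	/* hypothetical counter */

static void netpoll_pool_get(void)
{
	/* first user fills the pool */
	if (atomic_inc_return(&netpoll_users) == 1)
		refill_skbs();
}

static void netpoll_pool_put(void)
{
	/* last user gone: flush, so no memory stays allocated */
	if (atomic_dec_and_test(&netpoll_users))
		skb_queue_purge(&skb_pool);
}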

On the other hand, it seems a better design to have a pool per user,
so that users are independent and do not share the same pool.

In fact, if the pool is per-user, we can make the whole netpoll
transmission path faster by dequeuing an skb from the pool instead of
trying to allocate a fresh skb, as sketched below.
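
Concretely, that faster path would invert the order used by find_skb() in the diff below, consuming the pre-filled per-target pool first. This variant is a hypothetical sketch of the idea, not part of this series, and it omits the reserve handling and retry loop of the real function.

/* Hypothetical pool-first variant of find_skb(): dequeue a
 * pre-allocated skb first and only fall back to allocation when
 * the pool is empty (the posted patch keeps the opposite order).
 */
static struct sk_buff *find_skb_pool_first(struct netpoll *np, int len)
{
	struct sk_buff *skb;

	skb = skb_dequeue(&np->skb_pool);	/* fast path, no allocation */
	if (!skb)
		skb = alloc_skb(len, GFP_ATOMIC);	/* slow path */

	return skb;
}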

Patch

diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h
index cd4e28db0cbd..77635b885c18 100644
--- a/include/linux/netpoll.h
+++ b/include/linux/netpoll.h
@@ -32,6 +32,7 @@  struct netpoll {
 	bool ipv6;
 	u16 local_port, remote_port;
 	u8 remote_mac[ETH_ALEN];
+	struct sk_buff_head skb_pool;
 };
 
 struct netpoll_info {
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index e83fd8bdce36..c9a9e37e2d74 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -45,9 +45,6 @@ 
 
 #define MAX_UDP_CHUNK 1460
 #define MAX_SKBS 32
-
-static struct sk_buff_head skb_pool;
-
 #define USEC_PER_POLL	50
 
 #define MAX_SKB_SIZE							\
@@ -234,20 +231,23 @@  void netpoll_poll_enable(struct net_device *dev)
 		up(&ni->dev_lock);
 }
 
-static void refill_skbs(void)
+static void refill_skbs(struct netpoll *np)
 {
+	struct sk_buff_head *skb_pool;
 	struct sk_buff *skb;
 	unsigned long flags;
 
-	spin_lock_irqsave(&skb_pool.lock, flags);
-	while (skb_pool.qlen < MAX_SKBS) {
+	skb_pool = &np->skb_pool;
+
+	spin_lock_irqsave(&skb_pool->lock, flags);
+	while (skb_pool->qlen < MAX_SKBS) {
 		skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC);
 		if (!skb)
 			break;
 
-		__skb_queue_tail(&skb_pool, skb);
+		__skb_queue_tail(skb_pool, skb);
 	}
-	spin_unlock_irqrestore(&skb_pool.lock, flags);
+	spin_unlock_irqrestore(&skb_pool->lock, flags);
 }
 
 static void zap_completion_queue(void)
@@ -284,12 +284,12 @@  static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
 	struct sk_buff *skb;
 
 	zap_completion_queue();
-	refill_skbs();
+	refill_skbs(np);
 repeat:
 
 	skb = alloc_skb(len, GFP_ATOMIC);
 	if (!skb)
-		skb = skb_dequeue(&skb_pool);
+		skb = skb_dequeue(&np->skb_pool);
 
 	if (!skb) {
 		if (++count < 10) {
@@ -778,7 +778,7 @@  int netpoll_setup(struct netpoll *np)
 	rtnl_unlock();
 
 	/* fill up the skb queue */
-	refill_skbs();
+	refill_skbs(np);
 	return 0;
 
 put:
@@ -792,13 +792,6 @@  int netpoll_setup(struct netpoll *np)
 }
 EXPORT_SYMBOL(netpoll_setup);
 
-static int __init netpoll_init(void)
-{
-	skb_queue_head_init(&skb_pool);
-	return 0;
-}
-core_initcall(netpoll_init);
-
 static void rcu_cleanup_netpoll_info(struct rcu_head *rcu_head)
 {
 	struct netpoll_info *npinfo =