
[net-next,v8,07/17] net: page_pool: introduce page_pool_mp_return_in_cache

Message ID: 20241204172204.4180482-8-dw@davidwei.uk
State: New
Series: io_uring zero copy rx

Commit Message

David Wei Dec. 4, 2024, 5:21 p.m. UTC
From: Pavel Begunkov <asml.silence@gmail.com>

Add a helper that allows a page pool memory provider to efficiently
return a netmem off the allocation callback.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: David Wei <dw@davidwei.uk>
---
 include/net/page_pool/memory_provider.h |  4 ++++
 net/core/page_pool.c                    | 19 +++++++++++++++++++
 2 files changed, 23 insertions(+)

Comments

Mina Almasry Dec. 9, 2024, 5:15 p.m. UTC | #1
On Wed, Dec 4, 2024 at 9:22 AM David Wei <dw@davidwei.uk> wrote:
>
> From: Pavel Begunkov <asml.silence@gmail.com>
>
> Add a helper that allows a page pool memory provider to efficiently
> return a netmem off the allocation callback.
>
> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
> Signed-off-by: David Wei <dw@davidwei.uk>
> ---
>  include/net/page_pool/memory_provider.h |  4 ++++
>  net/core/page_pool.c                    | 19 +++++++++++++++++++
>  2 files changed, 23 insertions(+)
>
> diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
> index 83d7eec0058d..352b3a35d31c 100644
> --- a/include/net/page_pool/memory_provider.h
> +++ b/include/net/page_pool/memory_provider.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +
>  #ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
>  #define _NET_PAGE_POOL_MEMORY_PROVIDER_H
>
> @@ -7,4 +9,6 @@ int page_pool_mp_init_paged_area(struct page_pool *pool,
>  void page_pool_mp_release_area(struct page_pool *pool,
>                                 struct net_iov_area *area);
>
> +void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem);
> +
>  #endif
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index d17e536ba8b8..24f29bdd70ab 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -1213,3 +1213,22 @@ void page_pool_mp_release_area(struct page_pool *pool,
>                 page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
>         }
>  }
> +
> +/*
> + * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
> + * @pool:      pool from which pages were allocated
> + * @netmem:    netmem to return
> + *
> + * Return already allocated and accounted netmem to the page pool's allocation
> + * cache. The function doesn't provide synchronisation and must only be called
> + * from the napi context.
> + */
> +void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
> +{
> +       if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
> +               return;
> +

Really the caller needs to check this, and if the caller is checking
it then this additional check is unnecessarily defensive I would say.
But not really a big deal. I think I gave this feedback on the
previous iteration.

Reviewed-by: Mina Almasry <almasrymina@google.com>
Pavel Begunkov Dec. 9, 2024, 5:28 p.m. UTC | #2
On 12/9/24 17:15, Mina Almasry wrote:
> On Wed, Dec 4, 2024 at 9:22 AM David Wei <dw@davidwei.uk> wrote:
...
>> +/*
>> + * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
>> + * @pool:      pool from which pages were allocated
>> + * @netmem:    netmem to return
>> + *
>> + * Return already allocated and accounted netmem to the page pool's allocation
>> + * cache. The function doesn't provide synchronisation and must only be called
>> + * from the napi context.
>> + */
>> +void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
>> +{
>> +       if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
>> +               return;
>> +
> 
> Really the caller needs to check this, and if the caller is checking
> it then this additional check is unnecessarily defensive I would say.
> But not really a big deal. I think I gave this feedback on the
> previous iteration.

I think I already killed it. Nevertheless, that's true, the caller
has to check it, which is why it's a warning.

> Reviewed-by: Mina Almasry <almasrymina@google.com>
Jakub Kicinski Dec. 10, 2024, 3:40 a.m. UTC | #3
On Wed,  4 Dec 2024 09:21:46 -0800 David Wei wrote:
> +/*
> + * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
> + * @pool:	pool from which pages were allocated
> + * @netmem:	netmem to return
> + *
> + * Return already allocated and accounted netmem to the page pool's allocation
> + * cache. The function doesn't provide synchronisation and must only be called
> + * from the napi context.

NAPI is irrelevant; this helper, IIUC, has to be called down the call
chain from mp_ops->alloc().

> + */
> +void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
> +{
> +	if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
> +		return;

I'd

	return false;

without a warning.

> +	page_pool_dma_sync_for_device(pool, netmem, -1);
> +	page_pool_fragment_netmem(netmem, 1);
> +	pool->alloc.cache[pool->alloc.count++] = netmem;

and here:

	return true;

this way MPs can use the return value as a stop condition in a do {} while()
loop, without having to duplicate the check.

	do {
		netmem = alloc...
		... logic;
	} while (page_pool_mp_alloc_refill(pp, netmem));

	/* last netmem didn't fit in the cache */
	return netmem;
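
For illustration only, a bool-returning variant along the lines suggested
above might look like the sketch below. It reuses only the calls already
present in the posted patch and is not part of the series:

	bool page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
	{
		/* report "cache full" to the provider instead of warning */
		if (pool->alloc.count >= PP_ALLOC_CACHE_REFILL)
			return false;

		page_pool_dma_sync_for_device(pool, netmem, -1);
		page_pool_fragment_netmem(netmem, 1);
		pool->alloc.cache[pool->alloc.count++] = netmem;
		return true;
	}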
Pavel Begunkov Dec. 10, 2024, 4:31 a.m. UTC | #4
On 12/10/24 03:40, Jakub Kicinski wrote:
> On Wed,  4 Dec 2024 09:21:46 -0800 David Wei wrote:
>> +/*
>> + * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
>> + * @pool:	pool from which pages were allocated
>> + * @netmem:	netmem to return
>> + *
>> + * Return already allocated and accounted netmem to the page pool's allocation
>> + * cache. The function doesn't provide synchronisation and must only be called
>> + * from the napi context.
> 
> NAPI is irrelevant, this helper, IIUC, has to be called down the call
> chain from mp_ops->alloc().
> 
>> + */
>> +void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
>> +{
>> +	if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
>> +		return;
> 
> I'd
> 
> 	return false;
> 
> without a warning.
> 
>> +	page_pool_dma_sync_for_device(pool, netmem, -1);
>> +	page_pool_fragment_netmem(netmem, 1);
>> +	pool->alloc.cache[pool->alloc.count++] = netmem;
> 
> and here:
> 
> 	return true;
> 
> this say mps can use return value as a stop condition in a do {} while()
> loop, without having to duplicate the check.
> 
> 	do {
> 		netmem = alloc...
> 		... logic;
> 	} while (page_pool_mp_alloc_refill(pp, netmem));
> 
> 	/* last netmem didn't fit in the cache */
> 	return netmem;

That last netmem is a problem. Returning it is not a bad option,
but it doesn't feel right. Providers should rather converge to
one way of returning buffers and batching here is needed.

I'd rather prefer this one then

while (pp_has_space()) {
	netmem = alloc();
	pp_push(netmem);
}

Any thoughts on that?
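
Spelled out against the fields used by the posted patch, the two helpers in
that sketch might look roughly like the following. pp_has_space()/pp_push()
are placeholder names from the discussion, not existing page pool API, and
where they would live is left open:

	static bool page_pool_mp_has_space(const struct page_pool *pool)
	{
		return pool->alloc.count < PP_ALLOC_CACHE_REFILL;
	}

	static void page_pool_mp_push(struct page_pool *pool, netmem_ref netmem)
	{
		/* same body as the posted helper, minus the capacity check */
		page_pool_dma_sync_for_device(pool, netmem, -1);
		page_pool_fragment_netmem(netmem, 1);
		pool->alloc.cache[pool->alloc.count++] = netmem;
	}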
Jakub Kicinski Dec. 11, 2024, 12:06 a.m. UTC | #5
On Tue, 10 Dec 2024 04:31:53 +0000 Pavel Begunkov wrote:
> >> +	page_pool_dma_sync_for_device(pool, netmem, -1);
> >> +	page_pool_fragment_netmem(netmem, 1);
> >> +	pool->alloc.cache[pool->alloc.count++] = netmem;  
> > 
> > and here:
> > 
> > 	return true;
> > 
> > this way MPs can use the return value as a stop condition in a do {} while()
> > loop, without having to duplicate the check.
> > 
> > 	do {
> > 		netmem = alloc...
> > 		... logic;
> > 	} while (page_pool_mp_alloc_refill(pp, netmem));
> > 
> > 	/* last netmem didn't fit in the cache */
> > 	return netmem;  
> 
> That last netmem is a problem. Returning it is not a bad option,
> but it doesn't feel right. Providers should rather converge to
> one way of returning buffers and batching here is needed.
> 
> I'd rather prefer this one then
> 
> while (pp_has_space()) {
> 	netmem = alloc();
> 	pp_push(netmem);
> }
> 
> Any thoughts on that?

No strong preference. If I was the one who placed the ->alloc() call
the way it is placed, it's probably because it saves us a conditional:
it saves us checking if the cache is empty.

If we want to have the page pool pick the last object out of the cache,
I'd lean towards avoiding all the helpers. Make the callback:

 void alloc_netmem(struct page_pool *pool, gfp_t gfp,
                   size_t cnt, netmem_ref *arr);

you get an @arr(ay) to fill up with up to @cnt elements. Mirroring
alloc_pages_bulk_array_node() (down to its, to me at least, unusual
ordering of arguments).
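
A rough sketch of the provider side of such a callback is below.
example_mp_alloc_netmem() and example_mp_get_free_netmem() are hypothetical
names, gfp handling is omitted, and how a partial fill would be reported back
is left open in the suggestion:

	static void example_mp_alloc_netmem(struct page_pool *pool, gfp_t gfp,
					    size_t cnt, netmem_ref *arr)
	{
		size_t i;

		for (i = 0; i < cnt; i++) {
			/* stand-in for however the provider tracks free buffers */
			netmem_ref netmem = example_mp_get_free_netmem(pool);

			if (!netmem)
				break;
			page_pool_dma_sync_for_device(pool, netmem, -1);
			page_pool_fragment_netmem(netmem, 1);
			arr[i] = netmem;
		}
	}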

Patch

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index 83d7eec0058d..352b3a35d31c 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -1,3 +1,5 @@ 
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
 #ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
 #define _NET_PAGE_POOL_MEMORY_PROVIDER_H
 
@@ -7,4 +9,6 @@  int page_pool_mp_init_paged_area(struct page_pool *pool,
 void page_pool_mp_release_area(struct page_pool *pool,
 				struct net_iov_area *area);
 
+void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem);
+
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index d17e536ba8b8..24f29bdd70ab 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1213,3 +1213,22 @@  void page_pool_mp_release_area(struct page_pool *pool,
 		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
 	}
 }
+
+/*
+ * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
+ * @pool:	pool from which pages were allocated
+ * @netmem:	netmem to return
+ *
+ * Return already allocated and accounted netmem to the page pool's allocation
+ * cache. The function doesn't provide synchronisation and must only be called
+ * from the napi context.
+ */
+void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
+{
+	if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
+		return;
+
+	page_pool_dma_sync_for_device(pool, netmem, -1);
+	page_pool_fragment_netmem(netmem, 1);
+	pool->alloc.cache[pool->alloc.count++] = netmem;
+}
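
For context, a minimal sketch of how a provider's allocation callback (the
mp_ops->alloc() path referred to in the thread) might use this helper.
example_mp_get_free_netmem() is a hypothetical stand-in for the provider's
own free list, and the exact callback signature is an assumption:

	static netmem_ref example_mp_alloc_netmems(struct page_pool *pool, gfp_t gfp)
	{
		netmem_ref netmem;

		/* top up the pool's allocation cache; this caller-side check is
		 * what the WARN_ON_ONCE() in the helper guards against */
		while (pool->alloc.count < PP_ALLOC_CACHE_REFILL) {
			netmem = example_mp_get_free_netmem(pool);
			if (!netmem)
				break;
			page_pool_mp_return_in_cache(pool, netmem);
		}

		if (!pool->alloc.count)
			return 0;

		/* hand one buffer straight back to the page pool core */
		return pool->alloc.cache[--pool->alloc.count];
	}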