
[RFC,v3,05/20] net: page_pool: add ->scrub mem provider callback

Message ID 20231219210357.4029713-6-dw@davidwei.uk (mailing list archive)
State RFC
Delegated to: Netdev Maintainers
Series Zero copy Rx using io_uring

Checks

Context                Check    Description
netdev/tree_selection  success  Guessing tree name failed - patch did not apply, async

Commit Message

David Wei Dec. 19, 2023, 9:03 p.m. UTC
From: Pavel Begunkov <asml.silence@gmail.com>

The page pool now waits for all ppiovs to return before destroying
itself; for that to happen, the memory provider might need to push out
some buffers, flush caches, and so on.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: David Wei <dw@davidwei.uk>
---
 include/net/page_pool/types.h | 1 +
 net/core/page_pool.c          | 2 ++
 2 files changed, 3 insertions(+)
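
For illustration, here is a minimal sketch of the provider side, assuming a
provider-private pointer (pool->mp_priv) and the other callbacks introduced
earlier in the series; the names below are made up for the example and are not
part of this patch. The point is that ->scrub gives the provider a last chance
to return any buffers it still holds privately, so the pool's inflight count
can reach zero and teardown can complete.

/*
 * Hypothetical provider, for illustration only: it parks a few pages in
 * a private cache, so ->scrub must push them back before the pool is
 * torn down, otherwise page_pool_release() keeps seeing them as inflight.
 */
struct demo_mp_state {
	struct page *cache[16];		/* pages held outside the pool */
	unsigned int nr_cached;
};

static void demo_mp_scrub(struct page_pool *pool)
{
	/* Assumes a provider-private pointer (pool->mp_priv) from the series. */
	struct demo_mp_state *state = pool->mp_priv;

	/* Return every privately held page; nothing may stay behind. */
	while (state->nr_cached)
		page_pool_put_full_page(pool, state->cache[--state->nr_cached],
					false);
}

static const struct pp_memory_provider_ops demo_mp_ops = {
	/* .init, .destroy, .alloc_pages, .release_page elided */
	.scrub	= demo_mp_scrub,	/* new hook added by this patch */
};

The callback is optional: the call site below checks mp_ops->scrub for NULL, so
providers that never hold buffers outside the pool can simply leave it unset.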

Patch

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index a701310b9811..fd846cac9fb6 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -134,6 +134,7 @@  enum pp_memory_provider_type {
 struct pp_memory_provider_ops {
 	int (*init)(struct page_pool *pool);
 	void (*destroy)(struct page_pool *pool);
+	void (*scrub)(struct page_pool *pool);
 	struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
 	bool (*release_page)(struct page_pool *pool, struct page *page);
 };
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 71af9835638e..9e3073d61a97 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -947,6 +947,8 @@  static int page_pool_release(struct page_pool *pool)
 {
 	int inflight;
 
+	if (pool->mp_ops && pool->mp_ops->scrub)
+		pool->mp_ops->scrub(pool);
 	page_pool_scrub(pool);
 	inflight = page_pool_inflight(pool);
 	if (!inflight)