Message ID: 20250108220644.3528845-7-dw@davidwei.uk (mailing list archive)
State:      New
Series:     io_uring zero copy rx
On Wed, 8 Jan 2025 14:06:27 -0800 David Wei wrote:
> From: Jakub Kicinski <kuba@kernel.org>
>
> A spin off from the original page pool memory providers patch by Jakub,
> which allows extending page pools with custom allocators. One of such
> providers is devmem TCP, and the other is io_uring zerocopy added in
> following patches.
>
> Suggested-by: Jakub Kicinski <kuba@kernel.org>

Something odd with authorship here. You list me as author (From)
but didn't add my SoB. Maybe add something like "Based on
earlier work by Jakub" to the commit and reset the tags?
Or the Suggested-by is just for the warning on ops not being built in?

> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
> Signed-off-by: David Wei <dw@davidwei.uk>
> ---
>  include/net/page_pool/memory_provider.h | 20 ++++++++++++++++++++
>  include/net/page_pool/types.h           |  4 ++++
>  net/core/devmem.c                       | 15 ++++++++++++++-
>  net/core/page_pool.c                    | 23 +++++++++++++++--------
>  4 files changed, 53 insertions(+), 9 deletions(-)
>  create mode 100644 include/net/page_pool/memory_provider.h
>
> diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
> new file mode 100644
> index 000000000000..79412a8714fa
> --- /dev/null
> +++ b/include/net/page_pool/memory_provider.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0
> + *
> + * page_pool/memory_provider.h
> + * Author: Pavel Begunkov <asml.silence@gmail.com>
> + * Author: David Wei <dw@davidwei.uk>

Not a customary thing in networking to list authors in comments.

> + */
> +#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
> +#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
> +
> +#include <net/netmem.h>
> +#include <net/page_pool/types.h>

No need? All you need is forward declarations for types at this stage.

> +struct memory_provider_ops {
> +	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
> +	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
> +	int (*init)(struct page_pool *pool);
> +	void (*destroy)(struct page_pool *pool);
> +};
> +
> +#endif

Rest LGTM.
On 1/16/25 00:44, Jakub Kicinski wrote:
> On Wed, 8 Jan 2025 14:06:27 -0800 David Wei wrote:
>> From: Jakub Kicinski <kuba@kernel.org>
>>
>> A spin off from the original page pool memory providers patch by Jakub,
>> which allows extending page pools with custom allocators. One of such
>> providers is devmem TCP, and the other is io_uring zerocopy added in
>> following patches.
>>
>> Suggested-by: Jakub Kicinski <kuba@kernel.org>
>
> Something odd with authorship here. You list me as author (From)
> but didn't add my SoB. Maybe add something like "Based on
> earlier work by Jakub" to the commit and reset the tags?

The intention was to change the author (failed) and put it as
suggested-by since you said before you don't care and changes pile up,
and even modification notes got deleted for unknown to me reasons in
v9. Based-on, Co-authored also sound good if you have a preference.

> Or the Suggested-by is just for the warning on ops not being built in?
>
>> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
>> Signed-off-by: David Wei <dw@davidwei.uk>
>> ---
>>  include/net/page_pool/memory_provider.h | 20 ++++++++++++++++++++
>>  include/net/page_pool/types.h           |  4 ++++
>>  net/core/devmem.c                       | 15 ++++++++++++++-
>>  net/core/page_pool.c                    | 23 +++++++++++++++--------
>>  4 files changed, 53 insertions(+), 9 deletions(-)
>>  create mode 100644 include/net/page_pool/memory_provider.h
>>
>> diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
>> new file mode 100644
>> index 000000000000..79412a8714fa
>> --- /dev/null
>> +++ b/include/net/page_pool/memory_provider.h
>> @@ -0,0 +1,20 @@
>> +/* SPDX-License-Identifier: GPL-2.0
>> + *
>> + * page_pool/memory_provider.h
>> + * Author: Pavel Begunkov <asml.silence@gmail.com>
>> + * Author: David Wei <dw@davidwei.uk>
>
> Not a customary thing in networking to list authors in comments.

I'm not used to that, but _all_ networking files I quickly looked
through before adding that had authors listed including some new ones.
Maybe I was just too lucky. I can kill it.

>> + */
>> +#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
>> +#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
>> +
>> +#include <net/netmem.h>
>> +#include <net/page_pool/types.h>
>
> No need? All you need is forward declarations for types at this stage.

That's getting extremely nit picky. I don't see why it's preferable
forward declaring all structures instead of including a "types.h" file.
Not like that matters or there is a dependency problem. Even from your
comments to v18, the list would likely need to grow with an mp_params
declaration.

>> +struct memory_provider_ops {
>> +	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
>> +	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
>> +	int (*init)(struct page_pool *pool);
>> +	void (*destroy)(struct page_pool *pool);
>> +};
>> +
>> +#endif
>
> Rest LGTM.
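[Editorial aside: the forward-declaration approach suggested above looks roughly like the stand-alone sketch below. This is an illustrative mock-up, not the kernel header: `netmem_ref` and `gfp_t` are typedefs in the real tree, which cannot be forward-declared, so the stand-in typedefs here exist only to keep the sketch self-contained and compilable.]

```c
#include <stdbool.h>	/* stand-in; the kernel gets bool from linux/types.h */

/* Stand-ins: in the kernel these are typedefs from net/netmem.h and
 * linux/gfp_types.h; typedefs cannot be forward-declared, so their
 * defining headers would still be needed for these by-value types. */
typedef unsigned long netmem_ref;
typedef unsigned int gfp_t;

/* A struct used only via pointers needs just a forward declaration,
 * no include of its defining header. */
struct page_pool;

struct memory_provider_ops {
	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
	int (*init)(struct page_pool *pool);
	void (*destroy)(struct page_pool *pool);
};
```

The crux of the disagreement: `struct page_pool` appears only behind a pointer, so a forward declaration suffices for it, while the typedef'd value types are why keeping an include can still be justified.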
diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
new file mode 100644
index 000000000000..79412a8714fa
--- /dev/null
+++ b/include/net/page_pool/memory_provider.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * page_pool/memory_provider.h
+ * Author: Pavel Begunkov <asml.silence@gmail.com>
+ * Author: David Wei <dw@davidwei.uk>
+ */
+#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
+#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
+
+#include <net/netmem.h>
+#include <net/page_pool/types.h>
+
+struct memory_provider_ops {
+	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
+	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
+	int (*init)(struct page_pool *pool);
+	void (*destroy)(struct page_pool *pool);
+};
+
+#endif
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index ed4cd114180a..88f65c3e2ad9 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -152,8 +152,11 @@ struct page_pool_stats {
  */
 #define PAGE_POOL_FRAG_GROUP_ALIGN	(4 * sizeof(long))
 
+struct memory_provider_ops;
+
 struct pp_memory_provider_params {
 	void *mp_priv;
+	const struct memory_provider_ops *mp_ops;
 };
 
 struct page_pool {
@@ -216,6 +219,7 @@ struct page_pool {
 	struct ptr_ring ring;
 
 	void *mp_priv;
+	const struct memory_provider_ops *mp_ops;
 
 #ifdef CONFIG_PAGE_POOL_STATS
 	/* recycle stats are per-cpu to avoid locking */
diff --git a/net/core/devmem.c b/net/core/devmem.c
index c250db6993d3..48833c1dcbd4 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -15,6 +15,7 @@
 #include <net/netdev_queues.h>
 #include <net/netdev_rx_queue.h>
 #include <net/page_pool/helpers.h>
+#include <net/page_pool/memory_provider.h>
 #include <trace/events/page_pool.h>
 
 #include "devmem.h"
@@ -26,6 +27,8 @@
 /* Protected by rtnl_lock() */
 static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
 
+static const struct memory_provider_ops dmabuf_devmem_ops;
+
 static void
 net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 				   struct gen_pool_chunk *chunk,
				   void *not_used)
@@ -117,6 +120,7 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 		WARN_ON(rxq->mp_params.mp_priv != binding);
 
 		rxq->mp_params.mp_priv = NULL;
+		rxq->mp_params.mp_ops = NULL;
 
 		rxq_idx = get_netdev_rx_queue_index(rxq);
 
@@ -142,7 +146,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 	}
 
 	rxq = __netif_get_rx_queue(dev, rxq_idx);
-	if (rxq->mp_params.mp_priv) {
+	if (rxq->mp_params.mp_ops) {
 		NL_SET_ERR_MSG(extack, "designated queue already memory provider bound");
 		return -EEXIST;
 	}
@@ -160,6 +164,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 		return err;
 
 	rxq->mp_params.mp_priv = binding;
+	rxq->mp_params.mp_ops = &dmabuf_devmem_ops;
 
 	err = netdev_rx_queue_restart(dev, rxq_idx);
 	if (err)
@@ -169,6 +174,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 
 err_xa_erase:
 	rxq->mp_params.mp_priv = NULL;
+	rxq->mp_params.mp_ops = NULL;
 	xa_erase(&binding->bound_rxqs, xa_idx);
 
 	return err;
@@ -388,3 +394,10 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
 	/* We don't want the page pool put_page()ing our net_iovs.
 	 */
 	return false;
 }
+
+static const struct memory_provider_ops dmabuf_devmem_ops = {
+	.init			= mp_dmabuf_devmem_init,
+	.destroy		= mp_dmabuf_devmem_destroy,
+	.alloc_netmems		= mp_dmabuf_devmem_alloc_netmems,
+	.release_netmem		= mp_dmabuf_devmem_release_page,
+};
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index fc3a04823087..0c5da8c056ec 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -13,6 +13,7 @@
 #include <net/netdev_rx_queue.h>
 #include <net/page_pool/helpers.h>
+#include <net/page_pool/memory_provider.h>
 #include <net/xdp.h>
 
 #include <linux/dma-direction.h>
@@ -285,13 +286,19 @@ static int page_pool_init(struct page_pool *pool,
 		rxq = __netif_get_rx_queue(pool->slow.netdev,
 					   pool->slow.queue_idx);
 		pool->mp_priv = rxq->mp_params.mp_priv;
+		pool->mp_ops = rxq->mp_params.mp_ops;
 	}
 
-	if (pool->mp_priv) {
+	if (pool->mp_ops) {
 		if (!pool->dma_map || !pool->dma_sync)
 			return -EOPNOTSUPP;
 
-		err = mp_dmabuf_devmem_init(pool);
+		if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) {
+			err = -EFAULT;
+			goto free_ptr_ring;
+		}
+
+		err = pool->mp_ops->init(pool);
 		if (err) {
 			pr_warn("%s() mem-provider init failed %d\n", __func__,
 				err);
@@ -588,8 +595,8 @@ netmem_ref page_pool_alloc_netmems(struct page_pool *pool, gfp_t gfp)
 		return netmem;
 
 	/* Slow-path: cache empty, do real allocation */
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		netmem = mp_dmabuf_devmem_alloc_netmems(pool, gfp);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		netmem = pool->mp_ops->alloc_netmems(pool, gfp);
 	else
 		netmem = __page_pool_alloc_pages_slow(pool, gfp);
 	return netmem;
@@ -696,8 +703,8 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
 	bool put;
 
 	put = true;
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		put = mp_dmabuf_devmem_release_page(pool, netmem);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		put = pool->mp_ops->release_netmem(pool, netmem);
 	else
 		__page_pool_release_page_dma(pool, netmem);
@@ -1065,8 +1072,8 @@ static void __page_pool_destroy(struct page_pool *pool)
 	page_pool_unlist(pool);
 	page_pool_uninit(pool);
 
-	if (pool->mp_priv) {
-		mp_dmabuf_devmem_destroy(pool);
+	if (pool->mp_ops) {
+		pool->mp_ops->destroy(pool);
 		static_branch_dec(&page_pool_mem_providers);
 	}
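[Editorial aside: to see what the patch buys, here is a minimal userspace sketch of the same ops-table indirection. The pool dispatches through `mp_ops` when one is set, instead of hard-coding `mp_dmabuf_devmem_*()` calls, so any provider (devmem TCP, io_uring zcrx) can plug in. All names below are illustrative stand-ins, not the kernel API.]

```c
#include <stddef.h>

typedef unsigned long netmem_ref;	/* stand-in for the kernel typedef */

struct page_pool;			/* ops only need the pointer type */

struct memory_provider_ops {
	netmem_ref (*alloc_netmems)(struct page_pool *pool);
	int (*init)(struct page_pool *pool);
	void (*destroy)(struct page_pool *pool);
};

struct page_pool {
	const struct memory_provider_ops *mp_ops; /* NULL: default allocator */
	void *mp_priv;				  /* provider-private state */
};

/* Toy provider handing out refs from a counter, standing in for devmem. */
static netmem_ref toy_alloc(struct page_pool *pool)
{
	unsigned long *next = pool->mp_priv;

	return (netmem_ref)(*next)++;
}

static int toy_init(struct page_pool *pool)
{
	return pool->mp_priv ? 0 : -1;	/* require private state */
}

static void toy_destroy(struct page_pool *pool)
{
	(void)pool;			/* nothing to tear down in the toy */
}

static const struct memory_provider_ops toy_ops = {
	.alloc_netmems	= toy_alloc,
	.init		= toy_init,
	.destroy	= toy_destroy,
};

/* Mirrors page_pool_alloc_netmems(): dispatch via mp_ops when set. */
static netmem_ref pool_alloc(struct page_pool *pool)
{
	if (pool->mp_ops)
		return pool->mp_ops->alloc_netmems(pool);
	return 0;	/* default slow path elided in this sketch */
}
```

The const ops table also explains the `is_kernel_rodata()` check added in `page_pool_init()`: a legitimate provider's ops live in read-only data, so anything else is treated as corruption.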