Message ID | 20231208005250.2910004-2-almasrymina@google.com |
---|---|
State | Superseded |
Commit | c3f687d8dfeb33cffbb8f47c30002babfc4895d2 |
Delegated to: | Netdev Maintainers |
Series | Device Memory TCP |
On Thu, Dec 07, 2023 at 04:52:32PM -0800, Mina Almasry wrote:
> From: Jakub Kicinski <kuba@kernel.org>
>
> Releasing the DMA mapping will be useful for other types
> of pages, so factor it out. Make sure compiler inlines it,
> to avoid any regressions.
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Mina Almasry <almasrymina@google.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>

On Fri, 8 Dec 2023 at 02:52, Mina Almasry <almasrymina@google.com> wrote:
>
> From: Jakub Kicinski <kuba@kernel.org>
>
> Releasing the DMA mapping will be useful for other types
> of pages, so factor it out. Make sure compiler inlines it,
> to avoid any regressions.
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Mina Almasry <almasrymina@google.com>
>
> ---
>
> This is implemented by Jakub in his RFC:
>
> https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.com/T/
>
> I take no credit for the idea or implementation. This is a critical
> dependency of device memory TCP and thus I'm pulling it into this series
> to make it reviewable and mergeable.
>
> ---
>  net/core/page_pool.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index c2e7c9a6efbe..ca1b3b65c9b5 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -548,21 +548,16 @@ s32 page_pool_inflight(const struct page_pool *pool, bool strict)
>  	return inflight;
>  }
>
> -/* Disconnects a page (from a page_pool). API users can have a need
> - * to disconnect a page (from a page_pool), to allow it to be used as
> - * a regular page (that will eventually be returned to the normal
> - * page-allocator via put_page).
> - */
> -static void page_pool_return_page(struct page_pool *pool, struct page *page)
> +static __always_inline
> +void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
>  {
>  	dma_addr_t dma;
> -	int count;
>
>  	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
>  		/* Always account for inflight pages, even if we didn't
>  		 * map them
>  		 */
> -		goto skip_dma_unmap;
> +		return;
>
>  	dma = page_pool_get_dma_addr(page);
>
> @@ -571,7 +566,19 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
>  			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
>  			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
>  	page_pool_set_dma_addr(page, 0);
> -skip_dma_unmap:
> +}
> +
> +/* Disconnects a page (from a page_pool). API users can have a need
> + * to disconnect a page (from a page_pool), to allow it to be used as
> + * a regular page (that will eventually be returned to the normal
> + * page-allocator via put_page).
> + */
> +void page_pool_return_page(struct page_pool *pool, struct page *page)
> +{
> +	int count;
> +
> +	__page_pool_release_page_dma(pool, page);
> +
>  	page_pool_clear_pp_info(page);
>
>  	/* This may be the last page returned, releasing the pool, so
> --
> 2.43.0.472.g3155946c3a-goog
>

Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c2e7c9a6efbe..ca1b3b65c9b5 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -548,21 +548,16 @@ s32 page_pool_inflight(const struct page_pool *pool, bool strict)
 	return inflight;
 }
 
-/* Disconnects a page (from a page_pool). API users can have a need
- * to disconnect a page (from a page_pool), to allow it to be used as
- * a regular page (that will eventually be returned to the normal
- * page-allocator via put_page).
- */
-static void page_pool_return_page(struct page_pool *pool, struct page *page)
+static __always_inline
+void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
 {
 	dma_addr_t dma;
-	int count;
 
 	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
 		/* Always account for inflight pages, even if we didn't
 		 * map them
 		 */
-		goto skip_dma_unmap;
+		return;
 
 	dma = page_pool_get_dma_addr(page);
 
@@ -571,7 +566,19 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
 	page_pool_set_dma_addr(page, 0);
-skip_dma_unmap:
+}
+
+/* Disconnects a page (from a page_pool). API users can have a need
+ * to disconnect a page (from a page_pool), to allow it to be used as
+ * a regular page (that will eventually be returned to the normal
+ * page-allocator via put_page).
+ */
+void page_pool_return_page(struct page_pool *pool, struct page *page)
+{
+	int count;
+
+	__page_pool_release_page_dma(pool, page);
+
 	page_pool_clear_pp_info(page);
 
 	/* This may be the last page returned, releasing the pool, so
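
To make the intent concrete, the following is a minimal, self-contained
userspace sketch of the pattern the patch applies: the mapping-release step
is split into an __always_inline helper so the existing return path keeps
the same generated code, while the helper becomes callable from other paths.
This is not kernel code and not part of the patch; the struct and function
names below, and the printf stand-ins for the real unmap calls, are
illustrative assumptions only.

/*
 * Illustrative sketch only (plain C, outside the kernel). Mirrors the
 * factor-out above: release_buf_dma() plays the role of
 * __page_pool_release_page_dma(), return_buf() of page_pool_return_page().
 */
#include <stdbool.h>
#include <stdio.h>

#define __always_inline inline __attribute__((__always_inline__))

struct pool {
	bool dma_mapped;		/* stand-in for PP_FLAG_DMA_MAP */
};

struct buf {
	unsigned long dma_addr;		/* stand-in for the page's DMA address */
};

/* Factored-out step: release only the DMA mapping, if one exists. */
static __always_inline void release_buf_dma(struct pool *pool, struct buf *buf)
{
	if (!pool->dma_mapped)
		return;			/* nothing was mapped, nothing to undo */

	/* stand-in for dma_unmap_page_attrs() plus clearing the stored address */
	printf("unmapping dma addr 0x%lx\n", buf->dma_addr);
	buf->dma_addr = 0;
}

/* Full return path: same behaviour as before, now built on the helper. */
static void return_buf(struct pool *pool, struct buf *buf)
{
	release_buf_dma(pool, buf);
	/* ...clear per-buffer state and hand the buffer back... */
	printf("buffer returned\n");
}

int main(void)
{
	struct pool pool = { .dma_mapped = true };
	struct buf buf = { .dma_addr = 0x1000 };

	return_buf(&pool, &buf);
	return 0;
}

Because the helper is always inlined, the hot return path pays no extra
function call, which is what the commit message means by avoiding
regressions; the payoff of the split only appears once a second caller
needs to drop the DMA mapping on its own.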