
[v2,03/24] page_pool: Add netmem_set_dma_addr() and netmem_get_dma_addr()

Message ID 20230105214631.3939268-4-willy@infradead.org
State Superseded
Delegated to: Netdev Maintainers
Series Split netmem from struct page

Checks

Context Check Description
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ${{ matrix.test }} on ${{ matrix.arch }} with ${{ matrix.toolchain }}
bpf/vmtest-bpf-next-VM_Test-2 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-5 fail Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-8 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-9 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 fail Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 fail Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-22 fail Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 fail Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-36 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-37 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-38 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-PR fail PR summary
netdev/tree_selection success Guessed tree name to be net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix warning Target tree name not specified in the subject
netdev/cover_letter success Series has a cover letter
netdev/patch_count fail Series longer than 15 patches (and no cover letter)
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 5340 this patch: 5340
netdev/cc_maintainers warning 4 maintainers not CCed: edumazet@google.com davem@davemloft.net pabeni@redhat.com kuba@kernel.org
netdev/build_clang success Errors and warnings before: 1086 this patch: 1086
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 5554 this patch: 5554
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 37 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Matthew Wilcox (Oracle) Jan. 5, 2023, 9:46 p.m. UTC
Turn page_pool_set_dma_addr() and page_pool_get_dma_addr() into
wrappers.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/net/page_pool.h | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

Comments

Jesper Dangaard Brouer Jan. 6, 2023, 1:43 p.m. UTC | #1
On 05/01/2023 22.46, Matthew Wilcox (Oracle) wrote:
> Turn page_pool_set_dma_addr() and page_pool_get_dma_addr() into
> wrappers.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   include/net/page_pool.h | 22 ++++++++++++++++------
>   1 file changed, 16 insertions(+), 6 deletions(-)

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Ilias Apalodimas Jan. 9, 2023, 5:30 p.m. UTC | #2
Hi Matthew

On Thu, 5 Jan 2023 at 23:46, Matthew Wilcox (Oracle)
<willy@infradead.org> wrote:
> -static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
> +static inline dma_addr_t netmem_get_dma_addr(struct netmem *nmem)

Ideally, we'd like to avoid having people call these directly and use
the page_pool_(get|set)_dma_addr wrappers.  Can we add a comment in
v3?

Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Ilias Apalodimas Jan. 10, 2023, 9:17 a.m. UTC | #3
On Mon, 9 Jan 2023 at 19:30, Ilias Apalodimas
<ilias.apalodimas@linaro.org> wrote:
> > -static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
> > +static inline dma_addr_t netmem_get_dma_addr(struct netmem *nmem)
>
> Ideally, we'd like to avoid having people call these directly and use
> the page_pool_(get|set)_dma_addr wrappers.  Can we add a comment in
> v3?

Ignore this, I just saw the changes in mlx5.  This is fine as is

Matthew Wilcox (Oracle) Jan. 10, 2023, 6:15 p.m. UTC | #4
On Mon, Jan 09, 2023 at 07:30:30PM +0200, Ilias Apalodimas wrote:
> Hi Matthew

Hey Ilias.  Thanks for all the review!

> > -static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
> > +static inline dma_addr_t netmem_get_dma_addr(struct netmem *nmem)
> 
> Ideally, we'd like to avoid having people call these directly and use
> the page_pool_(get|set)_dma_addr wrappers.  Can we add a comment in
> v3?

I don't think this is what we want.  Currently drivers call
page_pool_get_dma_addr() on pages that are presumably from the page
pool, but the compiler isn't going to help them out if they just
get the struct page from somewhere random.  They'll get garbage and
presumably crash.

By returning a netmem pointer from page_pool, we help drivers ensure
that they're only passing around memory that was actually allocated
from the page_pool and so they won't get garbage if they pass it to
netmem_get_dma_addr().  The page_pool_get_dma_addr() wrapper is
a temporary measure until we have all the drivers converted to
use netmem alone.

Does that all make sense, or have I misunderstood what you wanted
from a comment?
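
To make that concrete, here is a minimal sketch of the compile-time help
netmem gives. struct netmem, netmem_get_dma_addr() and page_netmem() are
from this series; the allocator name page_pool_dev_alloc_netmem() is a
hypothetical stand-in for whatever the converted allocation API ends up
being called, and error handling is omitted:

static dma_addr_t example_ok(struct page_pool *pool)
{
	/* Hypothetical allocator name: returns pool-owned memory as
	 * a struct netmem *, so the type itself carries the proof of
	 * origin. */
	struct netmem *nmem = page_pool_dev_alloc_netmem(pool);

	/* Fine: this memory is known to come from a page_pool. */
	return netmem_get_dma_addr(nmem);
}

static dma_addr_t example_bad(void)
{
	struct page *page = alloc_page(GFP_KERNEL);	/* not pool memory */

	/* Incompatible pointer type: the compiler complains here,
	 * instead of the driver reading garbage at runtime. */
	return netmem_get_dma_addr(page);
}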
Matthew Wilcox (Oracle) Jan. 10, 2023, 6:16 p.m. UTC | #5
On Tue, Jan 10, 2023 at 11:17:35AM +0200, Ilias Apalodimas wrote:
> Ignore this, I just saw the changes in mlx5.  This is fine as is

I should read everything before replying ;-)

Patch

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 84b4ea8af015..196b585763d9 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -449,21 +449,31 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
 
-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t netmem_get_dma_addr(struct netmem *nmem)
 {
-	dma_addr_t ret = page->dma_addr;
+	dma_addr_t ret = nmem->dma_addr;
 
 	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
-		ret |= (dma_addr_t)page->dma_addr_upper << 16 << 16;
+		ret |= (dma_addr_t)nmem->dma_addr_upper << 16 << 16;
 
 	return ret;
 }
 
-static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+{
+	return netmem_get_dma_addr(page_netmem(page));
+}
+
+static inline void netmem_set_dma_addr(struct netmem *nmem, dma_addr_t addr)
 {
-	page->dma_addr = addr;
+	nmem->dma_addr = addr;
 	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
-		page->dma_addr_upper = upper_32_bits(addr);
+		nmem->dma_addr_upper = upper_32_bits(addr);
+}
+
+static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+{
+	netmem_set_dma_addr(page_netmem(page), addr);
 }
 
 static inline bool is_page_pool_compiled_in(void)
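
For reference, a short usage sketch of the helpers above. It assumes a
pool created with PP_FLAG_DMA_MAP (so the pool records the DMA address at
allocation time) and uses the pre-existing page_pool_dev_alloc_pages();
everything else comes from the diff:

static int example_rx_fill(struct page_pool *pool)
{
	struct page *page = page_pool_dev_alloc_pages(pool);
	dma_addr_t dma;

	if (!page)
		return -ENOMEM;

	/* On 32-bit kernels with a 64-bit dma_addr_t (the
	 * PAGE_POOL_DMA_USE_PP_FRAG_COUNT case) the address is split
	 * across dma_addr and dma_addr_upper; the getter reassembles
	 * it.  The "<< 16 << 16" presumably avoids a shift-count
	 * warning in the dead branch on configs where dma_addr_t is
	 * only 32 bits wide. */
	dma = page_pool_get_dma_addr(page);

	/* ... hand dma to the NIC's RX descriptor ... */
	return 0;
}

Since page_pool_get_dma_addr() and page_pool_set_dma_addr() now just
forward through page_netmem(), existing callers compile unchanged while
converted drivers can call the netmem variants directly.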