
[net-next,1/2] mm: add dma_addr_t to struct page

Message ID: 154990120685.24530.15350136329514629029.stgit@firesoul
State: New, archived
Series: Fix page_pool API and dma address storage

Commit Message

Jesper Dangaard Brouer Feb. 11, 2019, 4:06 p.m. UTC
The page_pool API is using page->private to store DMA addresses.
As pointed out by David Miller, we can't use that on 32-bit architectures
with 64-bit DMA.

This patch adds a new dma_addr_t struct to allow storing DMA addresses.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 include/linux/mm_types.h |    8 ++++++++
 1 file changed, 8 insertions(+)

Comments

Matthew Wilcox (Oracle) Feb. 11, 2019, 4:55 p.m. UTC | #1
On Mon, Feb 11, 2019 at 05:06:46PM +0100, Jesper Dangaard Brouer wrote:
> The page_pool API is using page->private to store DMA addresses.
> As pointed out by David Miller, we can't use that on 32-bit architectures
> with 64-bit DMA.
> 
> This patch adds a new dma_addr_t struct to allow storing DMA addresses.
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>

Reviewed-by: Matthew Wilcox <willy@infradead.org>

> +		struct {	/* page_pool used by netstack */
> +			/**
> +			 * @dma_addr: Page_pool need to store DMA-addr, and

s/need/needs/

> +			 * cannot use @private, as DMA-mappings can be 64-bit

s/DMA-mappings/DMA addresses/

> +			 * even on 32-bit Architectures.

s/A/a/

> +			 */
> +			dma_addr_t dma_addr; /* Shares area with @lru */

It also shares with @slab_list, @next, @compound_head, @pgmap and
@rcu_head.  I think it's pointless to try to document which other fields
something shares space with; the places which do it are a legacy from
before I rearranged struct page last year.  Anyone looking at this should
now be able to see "Oh, this is a union, only use the fields which are
in the union for the type of struct page I have here".
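
Condensed, the union now looks roughly like this (not the full
definition, just the alternatives relevant here):

struct page {
	unsigned long flags;
	union {
		struct {	/* page cache and anonymous pages */
			struct list_head lru;
			/* ... mapping, index, private ... */
		};
		struct {	/* page_pool used by netstack */
			dma_addr_t dma_addr;
		};
		struct {	/* slab, slob and slub */
			struct list_head slab_list;
			/* ... */
		};
		struct {	/* tail pages of compound page */
			unsigned long compound_head;
			/* ... */
		};
		struct {	/* ZONE_DEVICE pages */
			struct dev_pagemap *pgmap;
			/* ... */
		};
		struct rcu_head rcu_head;
	};
	/* ... */
};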

Are the pages allocated from this API ever supposed to be mapped to
userspace?

You also say in the documentation:

 * If no DMA mapping is done, then it can act as shim-layer that
 * fall-through to alloc_page.  As no state is kept on the page, the
 * regular put_page() call is sufficient.

I think this is probably a dangerous precedent to set.  Better to require
exactly one call to page_pool_put_page() (with the understanding that the
refcount may be elevated, so this may not be the final free of the page,
but the page will no longer be usable for its page_pool purpose).
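
For illustration, the contract I have in mind from the driver side (a
sketch, not real driver code; exact argument lists elided, and 'pool'
is assumed to be set up elsewhere):

	struct page *page;

	page = page_pool_alloc_pages(pool, GFP_ATOMIC);
	if (!page)
		return -ENOMEM;

	/* ... hand the page to hardware or the stack; other users
	 * may take additional references on it ...
	 */

	/* Exactly one put per alloc: releases the page's page_pool
	 * state.  If the refcount is still elevated this is not the
	 * final free, but the page is no longer usable for its
	 * page_pool purpose.
	 */
	page_pool_put_page(pool, page);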
Andrew Morton Feb. 11, 2019, 8:16 p.m. UTC | #2
On Mon, 11 Feb 2019 17:06:46 +0100 Jesper Dangaard Brouer <brouer@redhat.com> wrote:

> The page_pool API is using page->private to store DMA addresses.
> As pointed out by David Miller, we can't use that on 32-bit architectures
> with 64-bit DMA.
> 
> This patch adds a new dma_addr_t struct to allow storing DMA addresses.
> 
> ..
>
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -95,6 +95,14 @@ struct page {
>  			 */
>  			unsigned long private;
>  		};
> +		struct {	/* page_pool used by netstack */
> +			/**
> +			 * @dma_addr: Page_pool need to store DMA-addr, and
> +			 * cannot use @private, as DMA-mappings can be 64-bit
> +			 * even on 32-bit Architectures.
> +			 */

This comment is a bit awkward.  The discussion about why it doesn't use
->private is uninteresting going forward and is more material for a
changelog.

How about

			/**
			 * @dma_addr: page_pool requires a 64-bit value even on
			 * 32-bit architectures.
			 */

Otherwise,

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Jesper Dangaard Brouer Feb. 12, 2019, 8:28 a.m. UTC | #3
On Mon, 11 Feb 2019 12:16:24 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:

> On Mon, 11 Feb 2019 17:06:46 +0100 Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> 
> > The page_pool API is using page->private to store DMA addresses.
> > As pointed out by David Miller, we can't use that on 32-bit architectures
> > with 64-bit DMA.
> > 
> > This patch adds a new dma_addr_t struct to allow storing DMA addresses.
> > 
> > ..
> >
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -95,6 +95,14 @@ struct page {
> >  			 */
> >  			unsigned long private;
> >  		};
> > +		struct {	/* page_pool used by netstack */
> > +			/**
> > +			 * @dma_addr: Page_pool need to store DMA-addr, and
> > +			 * cannot use @private, as DMA-mappings can be 64-bit
> > +			 * even on 32-bit Architectures.
> > +			 */  
> 
> This comment is a bit awkward.  The discussion about why it doesn't use
> ->private is uninteresting going forward and is more material for a  
> changelog.
> 
> How about
> 
> 			/**
> 			 * @dma_addr: page_pool requires a 64-bit value even on
> 			 * 32-bit architectures.
> 			 */

Much better, I'll use that!

> Otherwise,
> 
> Acked-by: Andrew Morton <akpm@linux-foundation.org>

Thanks!
Jesper Dangaard Brouer Feb. 12, 2019, 10:06 a.m. UTC | #4
On Mon, 11 Feb 2019 08:55:51 -0800
Matthew Wilcox <willy@infradead.org> wrote:

> On Mon, Feb 11, 2019 at 05:06:46PM +0100, Jesper Dangaard Brouer wrote:
> > The page_pool API is using page->private to store DMA addresses.
> > As pointed out by David Miller, we can't use that on 32-bit architectures
> > with 64-bit DMA.
> > 
> > This patch adds a new dma_addr_t struct to allow storing DMA addresses.
> > 
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>  
> 
> Reviewed-by: Matthew Wilcox <willy@infradead.org>
> 
> > +		struct {	/* page_pool used by netstack */
> > +			/**
> > +			 * @dma_addr: Page_pool need to store DMA-addr, and  
> 
> s/need/needs/
> 
> > +			 * cannot use @private, as DMA-mappings can be 64-bit  
> 
> s/DMA-mappings/DMA addresses/
> 
> > +			 * even on 32-bit Architectures.  
> 
> s/A/a/

Yes, that comment needs improvement. I think I'll use AKPM's suggestion.


> > +			 */
> > +			dma_addr_t dma_addr; /* Shares area with @lru */  
> 
> It also shares with @slab_list, @next, @compound_head, @pgmap and
> @rcu_head.  I think it's pointless to try to document which other fields
> something shares space with; the places which do it are a legacy from
> before I rearranged struct page last year.  Anyone looking at this should
> now be able to see "Oh, this is a union, only use the fields which are
> in the union for the type of struct page I have here".

I agree, I'll strip that comment.

 
> Are the pages allocated from this API ever supposed to be mapped to
> userspace?

I would like to know which fields in struct page we cannot touch if we
want to keep this a possibility.

That said, I hope we don't need to do this. But as I integrate this
further into the netstack code, we might have to support this, or at
least release the page_pool "state" (currently only the DMA address)
before the skb_zcopy code path.  The first iteration will not do
zero-copy stuff, and later I'll coordinate with Willem on how to add
this, if needed.

My general opinion is that if an end-user wants to have pages mapped to
userspace, then page_pool (MEM_TYPE_PAGE_POOL) is not the right choice;
they should instead use MEM_TYPE_ZERO_COPY (see enum xdp_mem_type).  We
are generally working towards allowing NIC drivers to have a different
memory type per RX-ring.
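
For the normal case, the driver picks the memory type when setting up
each RX-ring; roughly like this (a sketch with placeholder names such
as 'rxq', 'ring_size' and 'dev'):

	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP, /* pool does DMA mapping */
		.pool_size	= ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool;
	int err;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	/* Tie this RX-ring's memory model to the pool */
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq,
					 MEM_TYPE_PAGE_POOL, pool);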


> You also say in the documentation:
> 
>  * If no DMA mapping is done, then it can act as shim-layer that
>  * fall-through to alloc_page.  As no state is kept on the page, the
>  * regular put_page() call is sufficient.
> 
> I think this is probably a dangerous precedent to set.  Better to require
> exactly one call to page_pool_put_page() (with the understanding that the
> refcount may be elevated, so this may not be the final free of the page,
> but the page will no longer be usable for its page_pool purpose).

Yes, this is actually how it is implemented today, and the comment
should be improved.  Today, when the refcount is elevated,
__page_pool_put_page() does call __page_pool_clean_page() to release
the page's page_pool state, and the page is in principle no longer
"usable" for page_pool purposes.  BUT I have considered removing this,
as it might not fit how we want to use the API.  In our current RFC we
found a need for (and introduced) a page_pool_unmap_page() call (which
calls __page_pool_clean_page()) for when a driver hits cases where the
code path doesn't have a callback to page_pool_put_page() but instead
ends up calling put_page().
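
Condensed, today's logic looks like this (a sketch of
__page_pool_put_page(), not the exact code; the recycle path is
abbreviated):

	void __page_pool_put_page(struct page_pool *pool,
				  struct page *page, bool allow_direct)
	{
		if (likely(page_ref_count(page) == 1)) {
			/* refcnt == 1: page_pool owns the page, and
			 * recycles it into the pool caches/ptr_ring.
			 */
			...
			return;
		}
		/* Refcount elevated: release page_pool state (DMA
		 * unmap) and fall back to the page allocator.  The
		 * page is no longer usable for page_pool purposes.
		 */
		__page_pool_clean_page(pool, page);
		put_page(page);
	}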

Patch

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2c471a2c43fa..3060700752cc 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -95,6 +95,14 @@  struct page {
 			 */
 			unsigned long private;
 		};
+		struct {	/* page_pool used by netstack */
+			/**
+			 * @dma_addr: Page_pool need to store DMA-addr, and
+			 * cannot use @private, as DMA-mappings can be 64-bit
+			 * even on 32-bit Architectures.
+			 */
+			dma_addr_t dma_addr; /* Shares area with @lru */
+		};
 		struct {	/* slab, slob and slub */
 			union {
 				struct list_head slab_list;	/* uses lru */
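
For context, patch 2/2 of this series then switches page_pool over from
page->private to the new member; roughly (a sketch, not the actual
follow-up patch):

	/* map: store the DMA address on the page */
	dma = dma_map_page_attrs(pool->p.dev, page, 0,
				 PAGE_SIZE << pool->p.order,
				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(pool->p.dev, dma))
		goto fail;
	page->dma_addr = dma;		/* was: set_page_private(page, dma) */

	/* unmap: read it back and clear it */
	dma = page->dma_addr;		/* was: page_private(page) */
	dma_unmap_page_attrs(pool->p.dev, dma,
			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
			     DMA_ATTR_SKIP_CPU_SYNC);
	page->dma_addr = 0;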