diff mbox series

[net-next,3/6] mm/page_alloc: use initial zero offset for page_frag_alloc_align()

Message ID 20240103095650.25769-4-linyunsheng@huawei.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Headers show
Series remove page frag implementation in vhost_net | expand

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1120 this patch: 1120
netdev/cc_maintainers success CCed 2 of 2 maintainers
netdev/build_clang success Errors and warnings before: 1141 this patch: 1141
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1146 this patch: 1146
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 67 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Yunsheng Lin Jan. 3, 2024, 9:56 a.m. UTC
The next patch is about to use page_frag_alloc_align() to
replace vhost_net_page_frag_refill(); the main difference
between those two frag page implementations is whether we
use an initial zero offset or not.

It seems more natural to use an initial zero offset, as it
may enable more consistent cache prefetching and skb frag
coalescing in the networking stack, so change it to use an
initial zero offset.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
CC: Alexander Duyck <alexander.duyck@gmail.com>
---
 mm/page_alloc.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

Comments

Alexander Duyck Jan. 5, 2024, 3:42 p.m. UTC | #1
On Wed, 2024-01-03 at 17:56 +0800, Yunsheng Lin wrote:
> The next patch is above to use page_frag_alloc_align() to
> replace vhost_net_page_frag_refill(), the main difference
> between those two frag page implementations is whether we
> use a initial zero offset or not.
> 
> It seems more nature to use a initial zero offset, as it
> may enable more correct cache prefetching and skb frag
> coalescing in the networking, so change it to use initial
> zero offset.
> 
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> CC: Alexander Duyck <alexander.duyck@gmail.com>

There are several advantages to running the offset as a countdown
rather than count-up value.

1. For the size of the chunks we are allocating, doing it from the
bottom up doesn't add any value: we are jumping in large enough
increments, and the buffers are used for DMA, so being sequential
doesn't add any value.

2. By starting at the end and working toward zero we can use built-in
functionality of the CPU: we only have to check whether our result
would be negative, rather than having to load two registers with the
values and then compare them, which saves us a few cycles. In addition
it saves us from having to read both the size and the offset for every
page.

Again this is another code cleanup at the cost of performance. I
realize many of the items you are removing would be considered micro-
optimizations but when we are dealing with millions of packets per
second those optimizations add up.
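The two bounds checks being compared can be contrasted in a small userspace sketch (illustrative code written for this summary, not the kernel implementation; names and values are assumptions):

```c
/* Count-down (current kernel behavior): one subtraction whose sign
 * doubles as the bounds check, so 'size' need not be re-read. */
static int alloc_countdown(int *offset, unsigned int fragsz)
{
	int off = *offset - (int)fragsz;

	if (off < 0)		/* falls straight out of the subtract */
		return -1;	/* would underflow: refill needed */
	*offset = off;
	return off;		/* fragment starts at the new offset */
}

/* Count-up (what this patch proposes): both 'offset' and 'size' must
 * be loaded and compared before the fragment can be handed out. */
static int alloc_countup(unsigned int *offset, unsigned int size,
			 unsigned int fragsz)
{
	unsigned int off = *offset;

	if (off + fragsz > size)
		return -1;	/* would overrun the page: refill needed */
	*offset = off + fragsz;
	return (int)off;	/* fragment starts at the old offset */
}
```

The count-down variant touches only the offset field; the count-up variant must read both `offset` and `size`, which is the extra load referred to above.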
Yunsheng Lin Jan. 8, 2024, 8:59 a.m. UTC | #2
On 2024/1/5 23:42, Alexander H Duyck wrote:
> On Wed, 2024-01-03 at 17:56 +0800, Yunsheng Lin wrote:
>> The next patch is above to use page_frag_alloc_align() to
>> replace vhost_net_page_frag_refill(), the main difference
>> between those two frag page implementations is whether we
>> use a initial zero offset or not.
>>
>> It seems more nature to use a initial zero offset, as it
>> may enable more correct cache prefetching and skb frag
>> coalescing in the networking, so change it to use initial
>> zero offset.
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>> CC: Alexander Duyck <alexander.duyck@gmail.com>
> 
> There are several advantages to running the offset as a countdown
> rather than count-up value.
> 
> 1. Specifically for the size of the chunks we are allocating doing it
> from the bottom up doesn't add any value as we are jumping in large
> enough amounts and are being used for DMA so being sequential doesn't
> add any value.

What is the expected size of the above chunks in your mind? I suppose
that is something like NAPI_HAS_SMALL_PAGE_FRAG, to avoid excessive
truesize underestimation?

It seems there is no minimum allocation size for
page_frag_alloc_align() now; as the page_frag API seems to be used
only in networking, should we enforce a minimum allocation size at the
API level?

> 
> 2. By starting at the end and working toward zero we can use built in
> functionality of the CPU to only have to check and see if our result
> would be signed rather than having to load two registers with the
> values and then compare them which saves us a few cycles. In addition
> it saves us from having to read both the size and the offset for every
> page.

I suppose the above is ok if we only use the page_frag_alloc*() API to
allocate memory for skb->data, not for the frags in skb_shinfo(), as
starting at the end and working toward zero means we cannot do skb
coalescing.

As page_frag_alloc*() returns a va now, I am assuming most users use
the API for skb->data, so I guess it is ok to drop this patch for now.

If we allow page_frag_alloc*() to return a struct page, we might need
this patch to enable coalescing.

> 
> Again this is another code cleanup at the cost of performance. I
> realize many of the items you are removing would be considered micro-
> optimizations but when we are dealing with millions of packets per
> second those optimizations add up.
> .
>
Alexander Duyck Jan. 8, 2024, 4:25 p.m. UTC | #3
On Mon, Jan 8, 2024 at 12:59 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/1/5 23:42, Alexander H Duyck wrote:
> > On Wed, 2024-01-03 at 17:56 +0800, Yunsheng Lin wrote:
> >> The next patch is above to use page_frag_alloc_align() to
> >> replace vhost_net_page_frag_refill(), the main difference
> >> between those two frag page implementations is whether we
> >> use a initial zero offset or not.
> >>
> >> It seems more nature to use a initial zero offset, as it
> >> may enable more correct cache prefetching and skb frag
> >> coalescing in the networking, so change it to use initial
> >> zero offset.
> >>
> >> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> >> CC: Alexander Duyck <alexander.duyck@gmail.com>
> >
> > There are several advantages to running the offset as a countdown
> > rather than count-up value.
> >
> > 1. Specifically for the size of the chunks we are allocating doing it
> > from the bottom up doesn't add any value as we are jumping in large
> > enough amounts and are being used for DMA so being sequential doesn't
> > add any value.
>
> What is the expected size of the above chunks in your mind? I suppose
> that is like NAPI_HAS_SMALL_PAGE_FRAG to avoid excessive truesize
> underestimation?
>
> It seems there is no limit for min size of allocation for
> page_frag_alloc_align() now, and as the page_frag API seems to be only
> used in networking, should we enforce the min size of allocation at API
> level?

The primary use case for this is to allocate fragments to be used for
storing network data. We usually end up allocating a minimum of 1K in
most cases, as you end up having to reserve something like 512B for
headroom and the skb_shared_info in an skb. In addition the slice
lengths become very hard to predict: these are usually used for
network traffic, so the size can be as small as 60B for a packet or as
large as 9KB.
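The overhead arithmetic can be sketched as follows (the headroom and shared-info constants are assumed round numbers for illustration, not the kernel's exact values):

```c
#define SMP_CACHE_BYTES	64
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* Rough model of per-fragment truesize: headroom is reserved in front
 * of the payload and a shared-info block is appended, so even a 60B
 * frame consumes several hundred bytes of the page. */
static unsigned int frag_truesize(unsigned int frame_len)
{
	unsigned int headroom = 192;	/* assumed NET_SKB_PAD-like pad */
	unsigned int shinfo = 320;	/* assumed skb_shared_info size */

	return ALIGN_UP(frame_len + headroom, SMP_CACHE_BYTES) +
	       ALIGN_UP(shinfo, SMP_CACHE_BYTES);
}
```

With these assumed constants a 60B frame costs 576B and a 1514B frame costs 2048B, which is in line with the ~1K minimum described above once drivers round allocations up.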

> >
> > 2. By starting at the end and working toward zero we can use built in
> > functionality of the CPU to only have to check and see if our result
> > would be signed rather than having to load two registers with the
> > values and then compare them which saves us a few cycles. In addition
> > it saves us from having to read both the size and the offset for every
> > page.
>
> I suppose the above is ok if we only use the page_frag_alloc*() API to
> allocate memory for skb->data, not for the frag in skb_shinfo(), as by
> starting at the end and working toward zero, it means we can not do skb
> coalescing.
>
> As page_frag_alloc*() is returning va now, I am assuming most of users
> is using the API for skb->data, I guess it is ok to drop this patch for
> now.
>
> If we allow page_frag_alloc*() to return struct page, we might need this
> patch to enable coalescing.

I would argue this is not the interface for enabling coalescing. This
is one of the reasons why this is implemented the way it is. When you
are aligning fragments you aren't going to be able to coalesce the
frames anyway as the alignment would push the fragments apart.
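A tiny sketch (illustrative, not kernel code) of a count-up allocator with alignment shows the gap that defeats coalescing:

```c
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* Count-up allocator that aligns the start of each fragment; returns
 * the fragment's start offset within the page. */
static unsigned int alloc_aligned(unsigned int *offset,
				  unsigned int fragsz, unsigned int align)
{
	unsigned int start = ALIGN_UP(*offset, align);

	*offset = start + fragsz;
	return start;
}
```

Two consecutive 60B fragments with 64B alignment land at offsets 0 and 64, leaving a 4B hole between them, so the payloads are not contiguous and cannot be merged into one frag.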
Yunsheng Lin Jan. 9, 2024, 11:22 a.m. UTC | #4
On 2024/1/9 0:25, Alexander Duyck wrote:
> On Mon, Jan 8, 2024 at 12:59 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:

...

> 
>>>
>>> 2. By starting at the end and working toward zero we can use built in
>>> functionality of the CPU to only have to check and see if our result
>>> would be signed rather than having to load two registers with the
>>> values and then compare them which saves us a few cycles. In addition
>>> it saves us from having to read both the size and the offset for every
>>> page.
>>
>> I suppose the above is ok if we only use the page_frag_alloc*() API to
>> allocate memory for skb->data, not for the frag in skb_shinfo(), as by
>> starting at the end and working toward zero, it means we can not do skb
>> coalescing.
>>
>> As page_frag_alloc*() is returning va now, I am assuming most of users
>> is using the API for skb->data, I guess it is ok to drop this patch for
>> now.
>>
>> If we allow page_frag_alloc*() to return struct page, we might need this
>> patch to enable coalescing.
> 
> I would argue this is not the interface for enabling coalescing. This
> is one of the reasons why this is implemented the way it is. When you
> are aligning fragments you aren't going to be able to coalesce the
> frames anyway as the alignment would push the fragments apart.

It seems the alignment requirement is the same for the same user of a page_frag
instance, so the aligning does not seem to be a problem for coalescing?

> .
>
Alexander Duyck Jan. 9, 2024, 3:37 p.m. UTC | #5
On Tue, Jan 9, 2024 at 3:22 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/1/9 0:25, Alexander Duyck wrote:
> > On Mon, Jan 8, 2024 at 12:59 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> ...
>
> >
> >>>
> >>> 2. By starting at the end and working toward zero we can use built in
> >>> functionality of the CPU to only have to check and see if our result
> >>> would be signed rather than having to load two registers with the
> >>> values and then compare them which saves us a few cycles. In addition
> >>> it saves us from having to read both the size and the offset for every
> >>> page.
> >>
> >> I suppose the above is ok if we only use the page_frag_alloc*() API to
> >> allocate memory for skb->data, not for the frag in skb_shinfo(), as by
> >> starting at the end and working toward zero, it means we can not do skb
> >> coalescing.
> >>
> >> As page_frag_alloc*() is returning va now, I am assuming most of users
> >> is using the API for skb->data, I guess it is ok to drop this patch for
> >> now.
> >>
> >> If we allow page_frag_alloc*() to return struct page, we might need this
> >> patch to enable coalescing.
> >
> > I would argue this is not the interface for enabling coalescing. This
> > is one of the reasons why this is implemented the way it is. When you
> > are aligning fragments you aren't going to be able to coalesce the
> > frames anyway as the alignment would push the fragments apart.
>
> It seems the alignment requirement is the same for the same user of a page_frag
> instance, so the aligning does not seem to be a problem for coalescing?

I'm a bit confused as to what coalescing you are referring to. If you
can provide a link it would be useful.

The problem is page_frag is a very generic item and can be generated
from a regular page on NICs that can internally reuse the same page
instance for multiple buffers. So it is possible to coalesce page
frags, however it is very unlikely to be coalescing them in the case
of them being used for skb buffers since it would require aligned
payloads on the network in order to really make it work without
hardware intervention of some sort and on such devices they are likely
allocating entire pages instead of page frags for the buffers.
Yunsheng Lin Jan. 10, 2024, 9:45 a.m. UTC | #6
On 2024/1/9 23:37, Alexander Duyck wrote:
> On Tue, Jan 9, 2024 at 3:22 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> On 2024/1/9 0:25, Alexander Duyck wrote:
>>> On Mon, Jan 8, 2024 at 12:59 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> ...
>>
>>>
>>>>>
>>>>> 2. By starting at the end and working toward zero we can use built in
>>>>> functionality of the CPU to only have to check and see if our result
>>>>> would be signed rather than having to load two registers with the
>>>>> values and then compare them which saves us a few cycles. In addition
>>>>> it saves us from having to read both the size and the offset for every
>>>>> page.
>>>>
>>>> I suppose the above is ok if we only use the page_frag_alloc*() API to
>>>> allocate memory for skb->data, not for the frag in skb_shinfo(), as by
>>>> starting at the end and working toward zero, it means we can not do skb
>>>> coalescing.
>>>>
>>>> As page_frag_alloc*() is returning va now, I am assuming most of users
>>>> is using the API for skb->data, I guess it is ok to drop this patch for
>>>> now.
>>>>
>>>> If we allow page_frag_alloc*() to return struct page, we might need this
>>>> patch to enable coalescing.
>>>
>>> I would argue this is not the interface for enabling coalescing. This
>>> is one of the reasons why this is implemented the way it is. When you
>>> are aligning fragments you aren't going to be able to coalesce the
>>> frames anyway as the alignment would push the fragments apart.
>>
>> It seems the alignment requirement is the same for the same user of a page_frag
>> instance, so the aligning does not seem to be a problem for coalescing?
> 
> I'm a bit confused as to what coalescing you are referring to. If you
> can provide a link it would be useful.
> 
> The problem is page_frag is a very generic item and can be generated
> from a regular page on NICs that can internally reuse the same page
> instance for multiple buffers. So it is possible to coalesce page
> frags, however it is very unlikely to be coalescing them in the case
> of them being used for skb buffers since it would require aligned
> payloads on the network in order to really make it work without
> hardware intervention of some sort and on such devices they are likely
> allocating entire pages instead of page frags for the buffers.

The main usecase in my mind is the page_frag used in the tx part of
networking, if we are able to unify page_frag and page_frag_cache in
the future:
https://elixir.bootlin.com/linux/v6.7-rc8/source/net/ipv4/tcp.c#L1183

Do you think it makes sense to unify them using the unified struct
below, and to provide APIs for returning 'page' and 'va' as page_pool
does now? It may mean we need to add one pointer to the new struct and
cannot do some tricks for performance; I suppose that is ok, as there
are always trade-offs between maintainability and evolvability?

struct page_frag {
	struct page *page;
	void *va;
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
	__u16 offset;
	__u16 size;
#else
	__u32 offset;
#endif
	/* we maintain a pagecount bias, so that we don't dirty cache line
	 * containing page->_refcount every time we allocate a fragment.
	 */
	unsigned int		pagecnt_bias;
	bool pfmemalloc;
};


Another usecase, not really related: hw may be configured with a small
BD buf size, e.g. 2K, while configured with a big mtu size or with hw
gro enabled, for a 4K pagesize; that means we may be able to cut the
number of frags in half, as it is usually the case that two
consecutive BDs point to the same page. I implemented a POC in hns3 a
long time ago using the frag implementation in page_pool, and it did
show some obvious performance gain, but as priorities shifted, I have
not been able to continue that POC yet.

> 
> .
>
Alexander Duyck Jan. 10, 2024, 4:21 p.m. UTC | #7
On Wed, Jan 10, 2024 at 1:45 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/1/9 23:37, Alexander Duyck wrote:
> > On Tue, Jan 9, 2024 at 3:22 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >>
> >> On 2024/1/9 0:25, Alexander Duyck wrote:
> >>> On Mon, Jan 8, 2024 at 12:59 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >>
> >> ...
> >>
> >>>
> >>>>>
> >>>>> 2. By starting at the end and working toward zero we can use built in
> >>>>> functionality of the CPU to only have to check and see if our result
> >>>>> would be signed rather than having to load two registers with the
> >>>>> values and then compare them which saves us a few cycles. In addition
> >>>>> it saves us from having to read both the size and the offset for every
> >>>>> page.
> >>>>
> >>>> I suppose the above is ok if we only use the page_frag_alloc*() API to
> >>>> allocate memory for skb->data, not for the frag in skb_shinfo(), as by
> >>>> starting at the end and working toward zero, it means we can not do skb
> >>>> coalescing.
> >>>>
> >>>> As page_frag_alloc*() is returning va now, I am assuming most of users
> >>>> is using the API for skb->data, I guess it is ok to drop this patch for
> >>>> now.
> >>>>
> >>>> If we allow page_frag_alloc*() to return struct page, we might need this
> >>>> patch to enable coalescing.
> >>>
> >>> I would argue this is not the interface for enabling coalescing. This
> >>> is one of the reasons why this is implemented the way it is. When you
> >>> are aligning fragments you aren't going to be able to coalesce the
> >>> frames anyway as the alignment would push the fragments apart.
> >>
> >> It seems the alignment requirement is the same for the same user of a page_frag
> >> instance, so the aligning does not seem to be a problem for coalescing?
> >
> > I'm a bit confused as to what coalescing you are referring to. If you
> > can provide a link it would be useful.
> >
> > The problem is page_frag is a very generic item and can be generated
> > from a regular page on NICs that can internally reuse the same page
> > instance for multiple buffers. So it is possible to coalesce page
> > frags, however it is very unlikely to be coalescing them in the case
> > of them being used for skb buffers since it would require aligned
> > payloads on the network in order to really make it work without
> > hardware intervention of some sort and on such devices they are likely
> > allocating entire pages instead of page frags for the buffers.
>
> The main usecase in my mind is the page_frag used in the tx part for
> networking if we are able to unify the page_frag and page_frag_cache in
> the future:
> https://elixir.bootlin.com/linux/v6.7-rc8/source/net/ipv4/tcp.c#L1183
>
> Do you think if it makes sense to unify them using below unified struct,
> and provide API for returning 'page' and 'va' as page_pool does now?

Short answer is no. The difference between the two is the use case,
and combining page and va in the same struct just ends up generating
indirect data duplication. So one step would be to look at seeing what
we could do to either convert page to va or va to page without taking
a significant penalty in either page_frag or page_frag_cache use case.
I had opted for using the virtual address as the Rx path has a strong
need for accessing the memory as soon as it is written to begin basic
parsing tasks and the like. In addition it is usually cheaper to go
from a virtual to a page rather than the other way around.
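A simplified model of why va-to-page is cheap (an assumption-laden userspace sketch: the real kernel uses virt_to_page()/pfn_to_page(), and highmem or sparse memory models complicate the reverse direction):

```c
/* With a flat memmap, va -> page is pure arithmetic: shift the byte
 * offset down to a page frame number and index the page array. */
struct page_model { int id; };

#define PAGE_SHIFT_MODEL 12	/* 4K pages assumed */

static struct page_model *virt_to_page_model(struct page_model *memmap,
					     char *mem_base, char *va)
{
	return &memmap[(va - mem_base) >> PAGE_SHIFT_MODEL];
}
```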

As for the rest of the fields we have essentially 2 main use cases.
The first is the Rx path which usually implies DMA and not knowing
what size of the incoming frame is and the need to have allocation
succeed to avoid jamming up a device. So basically it is always doing
something like allocating 2K although it may only receive somewhere
between 60B to 1514B, and it is always allocating from reserves. For
something like Tx keeping the pagecnt_bias and pfmemalloc values
doesn't make much sense as neither is really needed for the Tx use
case. Instead they can just make calls to page_get as they slice off
pieces of the page, and they have the option of failing if they cannot
get enough memory to put the skb together.
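The pagecnt_bias batching that the Rx path relies on (and that the Tx case could skip) can be modeled like this (a sketch of the idea only; BIAS_MAX is an assumed stand-in for PAGE_FRAG_CACHE_MAX_SIZE + 1):

```c
/* The cache grabs a large refcount once per page, then pays for each
 * fragment out of a local counter; the shared refcount (and its cache
 * line) is only touched again at refill time. */
struct frag_cache_model {
	unsigned int page_refcount;	/* stands in for page->_refcount */
	unsigned int pagecnt_bias;	/* local, non-atomic */
};

#define BIAS_MAX 65536

static void model_refill(struct frag_cache_model *c)
{
	c->page_refcount = BIAS_MAX;	/* one bulk refcount grab */
	c->pagecnt_bias = BIAS_MAX;
}

static void model_alloc_frag(struct frag_cache_model *c)
{
	c->pagecnt_bias--;		/* no atomic on the shared line */
}

/* At refill/teardown: return the unused bias in one shot; what is
 * left equals the number of fragments actually handed out. */
static unsigned int model_settle(struct frag_cache_model *c)
{
	c->page_refcount -= c->pagecnt_bias;
	return c->page_refcount;
}
```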

> It may mean we need to add one pointer to the new struct and are not able
> do some trick for performance, I suppose that is ok as there are always
> some trade off for maintainability and evolvability?
>
> struct page_frag {
>         struct *page;
>         void *va;
> #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>         __u16 offset;
>         __u16 size;
> #else
>         __u32 offset;
> #endif
>         /* we maintain a pagecount bias, so that we dont dirty cache line
>          * containing page->_refcount every time we allocate a fragment.
>          */
>         unsigned int            pagecnt_bias;
>         bool pfmemalloc;
> };

My general thought was to instead just make the page_frag a member of
the page_frag_cache, since those two blocks would be the same. Then we
could see how we evolve things from there. By doing that there should
essentially be no code change.

> Another usecase that is not really related is: hw may be configured with
> a small BD buf size, for 2K and configured with a big mtu size or have
> hw gro enabled, for 4K pagesize, that means we may be able to reduce the
> number of the frag num to half as it is usually the case that two
> consecutive BD pointing to the same page. I implemented a POC in hns3
> long time ago using the frag implememtation in page_pool, it did show
> some obvious peformance gain, But as the priority shifts, I have not
> been able to continue that POC yet.

The main issue for that use case is that in order to make it work you
are having to usually copybreak the headers out of the page frags as
you won't be able to save the space for the skb tailroom. Either that
or you are using header/data split and in that case the data portion
should really be written to full pages instead of page fragments
anyway and be making use of page pool instead.
Yunsheng Lin Jan. 11, 2024, 12:37 p.m. UTC | #8
On 2024/1/11 0:21, Alexander Duyck wrote:

...

>>
>> The main usecase in my mind is the page_frag used in the tx part for
>> networking if we are able to unify the page_frag and page_frag_cache in
>> the future:
>> https://elixir.bootlin.com/linux/v6.7-rc8/source/net/ipv4/tcp.c#L1183
>>
>> Do you think if it makes sense to unify them using below unified struct,
>> and provide API for returning 'page' and 'va' as page_pool does now?
> 
> Short answer is no. The difference between the two is the use case,
> and combining page and va in the same struct just ends up generating
> indirect data duplication. So one step would be to look at seeing what
> we could do to either convert page to va or va to page without taking
> a significant penalty in either page_frag or page_frag_cache use case.

I think we might do something like page_pool does, using some unused
fields in 'struct page' as the metadata of page_frag/page_frag_cache,
and reduce page_frag or page_frag_cache to a single pointer to
'struct page'?

I looked at the fields used by page_pool in 'struct page', and they
seem to be enough for the page_frag case too.

> I had opted for using the virtual address as the Rx path has a strong
> need for accessing the memory as soon as it is written to begin basic
> parsing tasks and the like. In addition it is usually cheaper to go
> from a virtual to a page rather than the other way around.

Is there a reason why it is usually cheaper to go from a virtual
address to a page rather than the other way around? I looked at the
implementations of both, but have not figured out why yet.

> 
> As for the rest of the fields we have essentially 2 main use cases.
> The first is the Rx path which usually implies DMA and not knowing
> what size of the incoming frame is and the need to have allocation
> succeed to avoid jamming up a device. So basically it is always doing
> something like allocating 2K although it may only receive somewhere
> between 60B to 1514B, and it is always allocating from reserves. For
> something like Tx keeping the pagecnt_bias and pfmemalloc values

I am not so sure I understand why it is ok to keep reusing a
pfmemalloc page for Tx yet?

I am assuming pfmemalloc is not about a specific page, but about the
state of the mm system when a page is allocated under memory pressure?
The pfmemalloc flag is used to drop rx packets that are not helpful
for reducing the memory pressure, and tx does not need to handle the
pfmemalloc case?

> doesn't make much sense as neither is really needed for the Tx use
> case. Instead they can just make calls to page_get as they slice off

I think for small packets, the bias may help to avoid some atomic
operations and some cache bouncing?

> pieces of the page, and they have the option of failing if they cannot
> get enough memory to put the skb together.
> 
>> It may mean we need to add one pointer to the new struct and are not able
>> do some trick for performance, I suppose that is ok as there are always
>> some trade off for maintainability and evolvability?
>>
>> struct page_frag {
>>         struct *page;
>>         void *va;
>> #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>>         __u16 offset;
>>         __u16 size;
>> #else
>>         __u32 offset;
>> #endif
>>         /* we maintain a pagecount bias, so that we dont dirty cache line
>>          * containing page->_refcount every time we allocate a fragment.
>>          */
>>         unsigned int            pagecnt_bias;
>>         bool pfmemalloc;
>> };
> 
> My general thought was to instead just make the page_frag a member of
> the page_frag_cache since those two blocks would be the same. Then we
> could see how we evolve things from there. By doing that there should
> essentially be no code change

Yes, that is a possible way to evolve things, but I would prefer
using the unused fields in 'struct page' for now, WDYT?

> 
>> Another usecase that is not really related is: hw may be configured with
>> a small BD buf size, for 2K and configured with a big mtu size or have
>> hw gro enabled, for 4K pagesize, that means we may be able to reduce the
>> number of the frag num to half as it is usually the case that two
>> consecutive BD pointing to the same page. I implemented a POC in hns3
>> long time ago using the frag implememtation in page_pool, it did show
>> some obvious peformance gain, But as the priority shifts, I have not
>> been able to continue that POC yet.
> 
> The main issue for that use case is that in order to make it work you
> are having to usually copybreak the headers out of the page frags as
> you won't be able to save the space for the skb tailroom. Either that
> or you are using header/data split and in that case the data portion
> should really be written to full pages instead of page fragments

I am not sure how hw implements header/data split yet. Is the hw able
to disable header/data split for small packets and enable it for
medium/big packets on the same queue, maybe spanning the data part
over multiple BDs for big packets, so that we have the least memory
usage and performance penalty for small/medium/big packets received
in the same queue?

> anyway and be making use of page pool instead.

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1f0b36dd81b5..083e0c38fb62 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4720,7 +4720,7 @@  void *page_frag_alloc_align(struct page_frag_cache *nc,
 		      unsigned int fragsz, gfp_t gfp_mask,
 		      unsigned int align)
 {
-	unsigned int size = PAGE_SIZE;
+	unsigned int size;
 	struct page *page;
 	int offset;
 
@@ -4732,10 +4732,6 @@  void *page_frag_alloc_align(struct page_frag_cache *nc,
 		if (!page)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
@@ -4744,11 +4740,18 @@  void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = 0;
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* if size can vary use size else just use PAGE_SIZE */
+	size = nc->size;
+#else
+	size = PAGE_SIZE;
+#endif
+
+	offset = ALIGN(nc->offset, align);
+	if (unlikely(offset + fragsz > size)) {
 		page = virt_to_page(nc->va);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -4759,17 +4762,13 @@  void *page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
+		offset = 0;
+		if (unlikely(fragsz > size)) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -4784,8 +4783,7 @@  void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= -align;
-	nc->offset = offset;
+	nc->offset = offset + fragsz;
 
 	return nc->va + offset;
 }
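For reference, the control flow after this patch can be modeled in userspace (a hypothetical sketch: the real function also manages page refcounts, gfp flags and pfmemalloc, which are omitted here):

```c
#include <stddef.h>

#define PAGE_SZ	4096
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

struct frag_cache {
	char page[PAGE_SZ];	/* stands in for the cached page's va */
	unsigned int offset;	/* counts up from zero after the patch */
};

/* Mirrors the patched logic: align first, test against the top of the
 * page, and "refill" (here just a reset) when the fragment won't fit. */
static void *frag_alloc_align(struct frag_cache *c, unsigned int fragsz,
			      unsigned int align)
{
	unsigned int offset = ALIGN_UP(c->offset, align);

	if (offset + fragsz > PAGE_SZ) {
		if (fragsz > PAGE_SZ)
			return NULL;	/* can never fit in one page */
		offset = 0;		/* new page: initial zero offset */
	}
	c->offset = offset + fragsz;
	return c->page + offset;
}
```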