[RFC,0/3] Introduce a bulk order-0 page allocator for sunrpc

Message ID 20210224102603.19524-1-mgorman@techsingularity.net (mailing list archive)

Message

Mel Gorman Feb. 24, 2021, 10:26 a.m. UTC
This is a prototype series that introduces a bulk order-0 page allocator
with sunrpc being the first user. The implementation is not particularly
efficient and the intention is to iron out what the semantics of the API
should be. That said, sunrpc was reported to have reduced allocation
latency when refilling a pool.

As a side-note, while the implementation could be more efficient, it
would require fairly deep surgery in numerous places. The lock scope would
need to be significantly reduced, particularly as vmstat, per-cpu and the
buddy allocator have different locking protocols that overlap -- e.g. all
partially depend on irqs being disabled at various points. Secondly,
the core of the allocator deals with single pages whereas both the bulk
allocator and per-cpu allocator operate in batches. All of that has to
be reconciled with all the existing users and their constraints (memory
offline, CMA and cpusets being the trickiest).

In terms of semantics required by new users, my preference is that a pair
of patches be applied -- the first which adds the required semantic to
the bulk allocator and the second which adds the new user.

Patch 1 of this series is a cleanup to sunrpc, it could be merged
	separately but is included here for convenience.

Patch 2 is the prototype bulk allocator

Patch 3 is the sunrpc user. Chuck also has a patch which further caches
	pages but is not included in this series. It's not directly
	related to the bulk allocator and as it caches pages, it might
	have other concerns (e.g. does it need a shrinker?)

This has only been lightly tested on a low-end NFS server. It did not break
but would benefit from an evaluation to see how much, if any, the headline
performance changes. The biggest concern is that a light test case showed
that there are a *lot* of bulk requests for 1 page which get delegated to
the normal allocator.  The same criteria should apply to any other users.

 include/linux/gfp.h   |  13 +++++
 mm/page_alloc.c       | 113 +++++++++++++++++++++++++++++++++++++++++-
 net/sunrpc/svc_xprt.c |  47 ++++++++++++------
 3 files changed, 157 insertions(+), 16 deletions(-)

Comments

Jesper Dangaard Brouer Feb. 24, 2021, 11:27 a.m. UTC | #1
On Wed, 24 Feb 2021 10:26:00 +0000
Mel Gorman <mgorman@techsingularity.net> wrote:

> This is a prototype series that introduces a bulk order-0 page allocator
> with sunrpc being the first user. The implementation is not particularly
> efficient and the intention is to iron out what the semantics of the API
> should be. That said, sunrpc was reported to have reduced allocation
> latency when refilling a pool.

I also have a use-case in page_pool, and I've been testing with the
earlier patches, results are here[1]

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org

Awesome to see this newer patchset! Thanks a lot for working on this!
I'll run some new tests based on this.

> As a side-note, while the implementation could be more efficient, it
> would require fairly deep surgery in numerous places. The lock scope would
> need to be significantly reduced, particularly as vmstat, per-cpu and the
> buddy allocator have different locking protocols that overlap -- e.g. all
> partially depend on irqs being disabled at various points. Secondly,
> the core of the allocator deals with single pages whereas both the bulk
> allocator and per-cpu allocator operate in batches. All of that has to
> be reconciled with all the existing users and their constraints (memory
> offline, CMA and cpusets being the trickiest).

As you can see in[1], I'm getting a significant speedup from this.  I
guess that the cost of finding the "zone" is higher than I expected, as
this is basically the cost that the bulk path amortizes.

 
> In terms of semantics required by new users, my preference is that a pair
> of patches be applied -- the first which adds the required semantic to
> the bulk allocator and the second which adds the new user.
> 
> Patch 1 of this series is a cleanup to sunrpc, it could be merged
> 	separately but is included here for convenience.
> 
> Patch 2 is the prototype bulk allocator
> 
> Patch 3 is the sunrpc user. Chuck also has a patch which further caches
> 	pages but is not included in this series. It's not directly
> 	related to the bulk allocator and as it caches pages, it might
> 	have other concerns (e.g. does it need a shrinker?)
> 
> This has only been lightly tested on a low-end NFS server. It did not break
> but would benefit from an evaluation to see how much, if any, the headline
> performance changes. The biggest concern is that a light test case showed
> that there are a *lot* of bulk requests for 1 page which get delegated to
> the normal allocator.  The same criteria should apply to any other users.

If you change local_irq_save(flags) to local_irq_disable() then you can
likely get better performance for 1 page requests via this API.  This
limits the API to be used in cases where IRQs are enabled (which is
most cases).  (For my use-case I will not do 1 page requests).
Mel Gorman Feb. 24, 2021, 11:55 a.m. UTC | #2
On Wed, Feb 24, 2021 at 12:27:23PM +0100, Jesper Dangaard Brouer wrote:
> On Wed, 24 Feb 2021 10:26:00 +0000
> Mel Gorman <mgorman@techsingularity.net> wrote:
> 
> > This is a prototype series that introduces a bulk order-0 page allocator
> > with sunrpc being the first user. The implementation is not particularly
> > efficient and the intention is to iron out what the semantics of the API
> > should be. That said, sunrpc was reported to have reduced allocation
> > latency when refilling a pool.
> 
> I also have a use-case in page_pool, and I've been testing with the
> earlier patches, results are here[1]
> 
> [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org
> 
> Awesome to see this newer patchset! Thanks a lot for working on this!
> I'll run some new tests based on this.
> 

Thanks and if they get finalised, a patch on top for review would be
nice with the results included in the changelog. Obviously any change
that would need to be made to the allocator would happen first.

> > As a side-note, while the implementation could be more efficient, it
> > would require fairly deep surgery in numerous places. The lock scope would
> > need to be significantly reduced, particularly as vmstat, per-cpu and the
> > buddy allocator have different locking protocols that overlap -- e.g. all
> > partially depend on irqs being disabled at various points. Secondly,
> > the core of the allocator deals with single pages whereas both the bulk
> > allocator and per-cpu allocator operate in batches. All of that has to
> > be reconciled with all the existing users and their constraints (memory
> > offline, CMA and cpusets being the trickiest).
> 
> As you can see in[1], I'm getting a significant speedup from this.  I
> guess that the cost of finding the "zone" is higher than I expected, as
> this is basically the cost that the bulk path amortizes.
> 

The obvious goal would be that if a refactoring did happen, the
performance would be at least neutral but hopefully improved.

> > In terms of semantics required by new users, my preference is that a pair
> > of patches be applied -- the first which adds the required semantic to
> > the bulk allocator and the second which adds the new user.
> > 
> > Patch 1 of this series is a cleanup to sunrpc, it could be merged
> > 	separately but is included here for convenience.
> > 
> > Patch 2 is the prototype bulk allocator
> > 
> > Patch 3 is the sunrpc user. Chuck also has a patch which further caches
> > 	pages but is not included in this series. It's not directly
> > 	related to the bulk allocator and as it caches pages, it might
> > 	have other concerns (e.g. does it need a shrinker?)
> > 
> > This has only been lightly tested on a low-end NFS server. It did not break
> > but would benefit from an evaluation to see how much, if any, the headline
> > performance changes. The biggest concern is that a light test case showed
> > that there are a *lot* of bulk requests for 1 page which get delegated to
> > the normal allocator.  The same criteria should apply to any other users.
> 
> If you change local_irq_save(flags) to local_irq_disable() then you can
> likely get better performance for 1 page requests via this API.  This
> limits the API to be used in cases where IRQs are enabled (which is
> most cases).  (For my use-case I will not do 1 page requests).
> 

I do not want to constrain the API to being IRQ-only prematurely. An
obvious alternative use case is the SLUB allocation path when a high-order
allocation fails. It's known that if the SLUB order is reduced, it has
an impact on hackbench communicating over sockets.  It would be interesting
to see what happens if order-0 pages are bulk allocated when s->min == 0
and that can be called from a blocking context. Tricky to test but could
be fudged by forcing all high-order allocations to fail when s->min ==
0 to evaluate the worst case scenario.  In addition, it would constrain
any potential refactoring if the lower levels have to choose between
local_irq_disable() vs local_irq_save() depending on the caller context.
Chuck Lever Feb. 24, 2021, 1:20 p.m. UTC | #3
> On Feb 24, 2021, at 5:26 AM, Mel Gorman <mgorman@techsingularity.net> wrote:
> 
> This is a prototype series that introduces a bulk order-0 page allocator
> with sunrpc being the first user. The implementation is not particularly
> efficient and the intention is to iron out what the semantics of the API
> should be. That said, sunrpc was reported to have reduced allocation
> latency when refilling a pool.
> 
> As a side-note, while the implementation could be more efficient, it
> would require fairly deep surgery in numerous places. The lock scope would
> need to be significantly reduced, particularly as vmstat, per-cpu and the
> buddy allocator have different locking protocols that overlap -- e.g. all
> partially depend on irqs being disabled at various points. Secondly,
> the core of the allocator deals with single pages whereas both the bulk
> allocator and per-cpu allocator operate in batches. All of that has to
> be reconciled with all the existing users and their constraints (memory
> offline, CMA and cpusets being the trickiest).
> 
> In terms of semantics required by new users, my preference is that a pair
> of patches be applied -- the first which adds the required semantic to
> the bulk allocator and the second which adds the new user.
> 
> Patch 1 of this series is a cleanup to sunrpc, it could be merged
> 	separately but is included here for convenience.
> 
> Patch 2 is the prototype bulk allocator
> 
> Patch 3 is the sunrpc user. Chuck also has a patch which further caches
> 	pages but is not included in this series. It's not directly
> 	related to the bulk allocator and as it caches pages, it might
> 	have other concerns (e.g. does it need a shrinker?)
> 
> This has only been lightly tested on a low-end NFS server. It did not break
> but would benefit from an evaluation to see how much, if any, the headline
> performance changes. The biggest concern is that a light test case showed
> that there are a *lot* of bulk requests for 1 page which get delegated to
> the normal allocator.  The same criteria should apply to any other users.
> 
> include/linux/gfp.h   |  13 +++++
> mm/page_alloc.c       | 113 +++++++++++++++++++++++++++++++++++++++++-
> net/sunrpc/svc_xprt.c |  47 ++++++++++++------
> 3 files changed, 157 insertions(+), 16 deletions(-)

Hi Mel-

Thank you for carrying the torch!


--
Chuck Lever
Jesper Dangaard Brouer Feb. 24, 2021, 6:56 p.m. UTC | #4
This is a followup to Mel Gorman's patchset:
 - Message-Id: <20210224102603.19524-1-mgorman@techsingularity.net>
 - https://lore.kernel.org/netdev/20210224102603.19524-1-mgorman@techsingularity.net/

Showing page_pool usage of the API for alloc_pages_bulk().

---

Jesper Dangaard Brouer (3):
      net: page_pool: refactor dma_map into own function page_pool_dma_map
      net: page_pool: use alloc_pages_bulk in refill code path
      mm: make zone->free_area[order] access faster


 include/linux/mmzone.h |    6 ++-
 net/core/page_pool.c   |   96 +++++++++++++++++++++++++++++++-----------------
 2 files changed, 65 insertions(+), 37 deletions(-)

--
Jesper Dangaard Brouer March 1, 2021, 1:29 p.m. UTC | #5
This is a followup to Mel Gorman's patchset:
 - Message-Id: <20210224102603.19524-1-mgorman@techsingularity.net>
 - https://lore.kernel.org/netdev/20210224102603.19524-1-mgorman@techsingularity.net/

Showing page_pool usage of the API for alloc_pages_bulk().

Maybe Mel Gorman will/can carry these patches?
(to keep it together with the alloc_pages_bulk API)

---

Jesper Dangaard Brouer (2):
      net: page_pool: refactor dma_map into own function page_pool_dma_map
      net: page_pool: use alloc_pages_bulk in refill code path


 net/core/page_pool.c |  102 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 63 insertions(+), 39 deletions(-)

--