Message ID | 1683355547-10524-1-git-send-email-zhaoyang.huang@unisoc.com (mailing list archive)
---|---
State | New
Series | [PATCHv3] mm: optimization on page allocation when CMA enabled
On Sat, May 06, 2023 at 02:45:47PM +0800, zhaoyang.huang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> Let us look at the series of scenarios below with WMARK_LOW=25MB,WMARK_MIN=5MB
> (managed pages 1.9GB). We can know that current 'fixed 1/2 ratio' start to use
> CMA since C which actually has caused U&R lower than WMARK_LOW (this should be
> deemed as against current memory policy, that is, UNMOVABLE & RECLAIMABLE should
> either stay around WATERMARK_LOW when no allocation or do reclaim via entering
> slowpath)
>
> -- Free_pages
> |
> |
> -- WMARK_LOW
> |
> -- Free_CMA
> |
> |
> --
>
> Free_CMA/Free_pages(MB)   A(12/30)   B(12/25)   C(12/20)
> fixed 1/2 ratio              N          N          Y
> this commit                  Y          Y          Y
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
> v2: do proportion check when zone_watermark_ok, update commit message
> v3: update coding style and simplify the logic when zone_watermark_ok

We're getting closer, thank you for working on it.

> ---
> ---
>  mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 40 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0745aed..7aca49d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3071,6 +3071,41 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>
>  }
>
> +#ifdef CONFIG_CMA
> +/*
> + * GFP_MOVABLE allocation could drain UNMOVABLE & RECLAIMABLE page blocks via
> + * the help of CMA which makes GFP_KERNEL failed. Checking if zone_watermark_ok
> + * again without ALLOC_CMA to see if to use CMA first.
> + */
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)

Can you, please, rename it to use_cma_first()?

> +{
> +	unsigned long watermark;
> +	bool cma_first = false;
> +
> +	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> +	/* check if GFP_MOVABLE pass previous zone_watermark_ok via the help of CMA */
> +	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))

Please, add {} for both "if" and "else" parts.

Also, please, invert the order to make it easier to follow:

	if (zone_watermark_ok(...)) {
		...
	} else {
		...
	}

> +		/*
> +		 * watermark failed means UNMOVABLE & RECLAIMBLE is not enough
> +		 * now, we should use cma first to keep them stay around the
> +		 * corresponding watermark
> +		 */
> +		cma_first = true;
> +	else
> +		/*
> +		 * remain previous fixed 1/2 logic when watermark ok as we have

Nobody knows what was previously here once your change is applied.
Please, do not refer to the previous state, add a comment about the
current state. You can probably (partially) move the comment from
__rmqueue().

> +		 * above protection now
> +		 */
> +		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
> +			zone_page_state(zone, NR_FREE_PAGES) / 2);
> +	return cma_first;
> +}
> +#else
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
> +{
> +	return false;
> +}
> +#endif
>  /*
>   * Do the hard work of removing an element from the buddy allocator.
>   * Call me with the zone->lock already held.
> @@ -3084,13 +3119,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  	if (IS_ENABLED(CONFIG_CMA)) {
>  		/*
>  		 * Balance movable allocations between regular and CMA areas by
> -		 * allocating from CMA when over half of the zone's free memory
> -		 * is in the CMA area.
> +		 * allocating from CMA base on judging zone_watermark_ok again
> +		 * to see if the latest check got pass via the help of CMA
>  		 */
> -		if (alloc_flags & ALLOC_CMA &&
> -			zone_page_state(zone, NR_FREE_CMA_PAGES) >
> -			zone_page_state(zone, NR_FREE_PAGES) / 2) {
> -			page = __rmqueue_cma_fallback(zone, order);
> +		if (migratetype == MIGRATE_MOVABLE) {
> +			page = __if_use_cma_first(zone, order, alloc_flags) ?
> +				__rmqueue_cma_fallback(zone, order) : NULL;

Can you put it like

	if (migratetype == MIGRATE_MOVABLE && use_cma_first(...)) {
		page = ...
		if (page)
			return page;
	}

to avoid using a ternary operator without a good reason?

Thanks!
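For reference, a minimal sketch of what the helper could look like once the
reviewer's requests are applied (rename to use_cma_first(), braces on both
branches, inverted condition). The comment wording below is illustrative, not
taken from any posted follow-up:

	static bool use_cma_first(struct zone *zone, unsigned int order,
				  unsigned int alloc_flags)
	{
		unsigned long watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);

		if (zone_watermark_ok(zone, order, watermark, 0,
				      alloc_flags & ~ALLOC_CMA)) {
			/*
			 * UNMOVABLE & RECLAIMABLE have enough room even without
			 * CMA, so balance movable allocations between regular
			 * and CMA areas by using CMA when over half of the
			 * zone's free memory is in the CMA area.
			 */
			return zone_page_state(zone, NR_FREE_CMA_PAGES) >
			       zone_page_state(zone, NR_FREE_PAGES) / 2;
		}

		/*
		 * The watermark only passes with the help of CMA, so use CMA
		 * first to keep UNMOVABLE & RECLAIMABLE pages around the
		 * watermark.
		 */
		return true;
	}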
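Likewise, a sketch of the __rmqueue() call site reshaped as suggested, without
the ternary operator (again assuming the use_cma_first() helper above):

	if (IS_ENABLED(CONFIG_CMA)) {
		if (migratetype == MIGRATE_MOVABLE &&
		    use_cma_first(zone, order, alloc_flags)) {
			page = __rmqueue_cma_fallback(zone, order);
			if (page)
				return page;
		}
	}

Since the CONFIG_CMA=n stub of use_cma_first() already returns false, the
outer IS_ENABLED() check could arguably be folded into the same condition,
but that would be a separate cleanup beyond what the review asks for.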