Message ID: 20221006101540.40686-1-laoar.shao@gmail.com
State: New
Series: mm/page_alloc: Fix incorrect PGFREE and PGALLOC for high-order page
On Thu, Oct 06, 2022 at 10:15:40AM +0000, Yafang Shao wrote:
> PGFREE and PGALLOC represent the number of freed and allocated pages.
> So the page order must be considered.
>
> Fixes: 44042b449872 ("mm/page_alloc: allow high-order pages to be stored on the per-cpu lists")
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Mel Gorman <mgorman@techsingularity.net>
On 2022/10/6 18:15, Yafang Shao wrote:
> PGFREE and PGALLOC represent the number of freed and allocated pages.
> So the page order must be considered.
>
> Fixes: 44042b449872 ("mm/page_alloc: allow high-order pages to be stored on the per-cpu lists")
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

LGTM. Thanks for fixing.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks,
Miaohe Lin

> Cc: Mel Gorman <mgorman@techsingularity.net>
> ---
>  mm/page_alloc.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e5486d4..3c0ee3b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3440,7 +3440,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
>  	int pindex;
>  	bool free_high;
>
> -	__count_vm_event(PGFREE);
> +	__count_vm_events(PGFREE, 1 << order);
>  	pindex = order_to_pindex(migratetype, order);
>  	list_add(&page->pcp_list, &pcp->lists[pindex]);
>  	pcp->count += 1 << order;
> @@ -3808,7 +3808,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
>  	pcp_spin_unlock_irqrestore(pcp, flags);
>  	pcp_trylock_finish(UP_flags);
>  	if (page) {
> -		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
> +		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
>  		zone_statistics(preferred_zone, zone, 1);
>  	}
>  	return page;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e5486d4..3c0ee3b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3440,7 +3440,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	int pindex;
 	bool free_high;

-	__count_vm_event(PGFREE);
+	__count_vm_events(PGFREE, 1 << order);
 	pindex = order_to_pindex(migratetype, order);
 	list_add(&page->pcp_list, &pcp->lists[pindex]);
 	pcp->count += 1 << order;
@@ -3808,7 +3808,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	pcp_spin_unlock_irqrestore(pcp, flags);
 	pcp_trylock_finish(UP_flags);
 	if (page) {
-		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
+		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 		zone_statistics(preferred_zone, zone, 1);
 	}
 	return page;
PGFREE and PGALLOC represent the number of freed and allocated pages. So the
page order must be considered.

Fixes: 44042b449872 ("mm/page_alloc: allow high-order pages to be stored on the per-cpu lists")
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)