| Message ID | 20210414133931.4555-8-mgorman@techsingularity.net (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Use local_lock for pcp protection and reduce stat overhead |
On 4/14/21 3:39 PM, Mel Gorman wrote:
> Both free_pcppages_bulk() and free_one_page() have very similar
> checks about whether a page's migratetype has changed under the
> zone lock. Use a common helper.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Seems like for free_pcppages_bulk() this patch makes it check for each page on
the pcplist
- zone->nr_isolate_pageblock != 0 instead of local bool (the performance might
  be the same I guess on modern cpu though)
- is_migrate_isolate(migratetype) for a migratetype obtained by
  get_pcppage_migratetype() which cannot be migrate_isolate so the check is
  useless.

As such it doesn't seem a worthwhile cleanup to me considering all the other
microoptimisations?

> ---
>  mm/page_alloc.c | 32 ++++++++++++++++++++++----------
>  1 file changed, 22 insertions(+), 10 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 295624fe293b..1ed370668e7f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1354,6 +1354,23 @@ static inline void prefetch_buddy(struct page *page)
>  	prefetch(buddy);
>  }
>
> +/*
> + * The migratetype of a page may have changed due to isolation so check.
> + * Assumes the caller holds the zone->lock to serialise against page
> + * isolation.
> + */
> +static inline int
> +check_migratetype_isolated(struct zone *zone, struct page *page, unsigned long pfn, int migratetype)
> +{
> +	/* If isolating, check if the migratetype has changed */
> +	if (unlikely(has_isolate_pageblock(zone) ||
> +		     is_migrate_isolate(migratetype))) {
> +		migratetype = get_pfnblock_migratetype(page, pfn);
> +	}
> +
> +	return migratetype;
> +}
> +
>  /*
>   * Frees a number of pages from the PCP lists
>   * Assumes all pages on list are in same zone, and of same order.
> @@ -1371,7 +1388,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  	int migratetype = 0;
>  	int batch_free = 0;
>  	int prefetch_nr = READ_ONCE(pcp->batch);
> -	bool isolated_pageblocks;
>  	struct page *page, *tmp;
>  	LIST_HEAD(head);
>
> @@ -1433,21 +1449,20 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  	 * both PREEMPT_RT and non-PREEMPT_RT configurations.
>  	 */
>  	spin_lock(&zone->lock);
> -	isolated_pageblocks = has_isolate_pageblock(zone);
>
>  	/*
>  	 * Use safe version since after __free_one_page(),
>  	 * page->lru.next will not point to original list.
>  	 */
>  	list_for_each_entry_safe(page, tmp, &head, lru) {
> +		unsigned long pfn = page_to_pfn(page);
>  		int mt = get_pcppage_migratetype(page);
> +
>  		/* MIGRATE_ISOLATE page should not go to pcplists */
>  		VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
> -		/* Pageblock could have been isolated meanwhile */
> -		if (unlikely(isolated_pageblocks))
> -			mt = get_pageblock_migratetype(page);
>
> -		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FPI_NONE);
> +		mt = check_migratetype_isolated(zone, page, pfn, mt);
> +		__free_one_page(page, pfn, zone, 0, mt, FPI_NONE);
>  		trace_mm_page_pcpu_drain(page, 0, mt);
>  	}
>  	spin_unlock(&zone->lock);
> @@ -1459,10 +1474,7 @@ static void free_one_page(struct zone *zone,
>  		int migratetype, fpi_t fpi_flags)
>  {
>  	spin_lock(&zone->lock);
> -	if (unlikely(has_isolate_pageblock(zone) ||
> -		     is_migrate_isolate(migratetype))) {
> -		migratetype = get_pfnblock_migratetype(page, pfn);
> -	}
> +	migratetype = check_migratetype_isolated(zone, page, pfn, migratetype);
>  	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
>  	spin_unlock(&zone->lock);
>  }
>
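[Editorial note: a condensed sketch of the two loop shapes Vlastimil is comparing, reduced from the quoted diff. This is an illustration, not code from the thread; all identifiers are the ones already used in mm/page_alloc.c above.]

	/* Before the patch: isolation state is sampled once, under zone->lock. */
	bool isolated_pageblocks = has_isolate_pageblock(zone);

	list_for_each_entry_safe(page, tmp, &head, lru) {
		int mt = get_pcppage_migratetype(page);

		if (unlikely(isolated_pageblocks))	/* hoisted read of one zone field */
			mt = get_pageblock_migratetype(page);
		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FPI_NONE);
	}

	/*
	 * After the patch: every iteration re-reads zone->nr_isolate_pageblock
	 * via has_isolate_pageblock() and also tests is_migrate_isolate(mt),
	 * which can never be true here because MIGRATE_ISOLATE pages are not
	 * put on pcplists.
	 */
	list_for_each_entry_safe(page, tmp, &head, lru) {
		unsigned long pfn = page_to_pfn(page);
		int mt = get_pcppage_migratetype(page);

		mt = check_migratetype_isolated(zone, page, pfn, mt);
		__free_one_page(page, pfn, zone, 0, mt, FPI_NONE);
	}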
On Wed, Apr 14, 2021 at 07:21:42PM +0200, Vlastimil Babka wrote:
> On 4/14/21 3:39 PM, Mel Gorman wrote:
> > Both free_pcppages_bulk() and free_one_page() have very similar
> > checks about whether a page's migratetype has changed under the
> > zone lock. Use a common helper.
> >
> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
>
> Seems like for free_pcppages_bulk() this patch makes it check for each page on
> the pcplist
> - zone->nr_isolate_pageblock != 0 instead of local bool (the performance might
>   be the same I guess on modern cpu though)
> - is_migrate_isolate(migratetype) for a migratetype obtained by
>   get_pcppage_migratetype() which cannot be migrate_isolate so the check is useless.
>
> As such it doesn't seem a worthwhile cleanup to me considering all the other
> microoptimisations?
>

The patch was a preparation patch for the rest of the series to avoid code
duplication and to consolidate checks together in one place to determine
if they are even correct.

Until zone_pcp_disable() came along, it was possible to have isolated PCP
pages in the lists even though zone->nr_isolate_pageblock could be 0 during
memory hot-remove, so the split in free_pcppages_bulk() was not necessarily
correct at all times.

The remaining problem is alloc_contig_pages(): it does not disable PCPs,
so both checks are necessary. If that also disabled PCPs then
check_migratetype_isolated() could be deleted, but the cost to
alloc_contig_pages() might be too high.

I'll delete this patch for now because it's relatively minor and there
should be other ways of keeping the code duplication down.
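[Editorial note: a hypothetical sketch of the alternative Mel mentions. If alloc_contig_range() bracketed isolation with pcplist disabling the way memory offlining does, no isolated page could linger on a pcplist and the recheck helper would have nothing to do. zone_pcp_disable(), zone_pcp_enable(), start_isolate_page_range() and undo_isolate_page_range() exist in v5.12-era mm code, but this bracketing is an assumption, not part of the series; zone, start_pfn, end_pfn, migratetype and ret are placeholders. The extra drain on every contig allocation is exactly the cost concern above.]

	/* Hypothetical only -- not what alloc_contig_range() does in v5.12. */
	zone_pcp_disable(zone);		/* force pcp high/batch to 0 and drain */

	ret = start_isolate_page_range(start_pfn, end_pfn, migratetype, 0);
	if (!ret) {
		/* ... migrate pages out of the isolated range ... */
		undo_isolate_page_range(start_pfn, end_pfn, migratetype);
	}

	zone_pcp_enable(zone);		/* restore per-cpu high/batch limits */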
On 4/15/21 11:33 AM, Mel Gorman wrote:
> On Wed, Apr 14, 2021 at 07:21:42PM +0200, Vlastimil Babka wrote:
>> On 4/14/21 3:39 PM, Mel Gorman wrote:
>> > Both free_pcppages_bulk() and free_one_page() have very similar
>> > checks about whether a page's migratetype has changed under the
>> > zone lock. Use a common helper.
>> >
>> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
>>
>> Seems like for free_pcppages_bulk() this patch makes it check for each page on
>> the pcplist
>> - zone->nr_isolate_pageblock != 0 instead of local bool (the performance might
>>   be the same I guess on modern cpu though)
>> - is_migrate_isolate(migratetype) for a migratetype obtained by
>>   get_pcppage_migratetype() which cannot be migrate_isolate so the check is useless.
>>
>> As such it doesn't seem a worthwhile cleanup to me considering all the other
>> microoptimisations?
>>
>
> The patch was a preparation patch for the rest of the series to avoid code
> duplication and to consolidate checks together in one place to determine
> if they are even correct.
>
> Until zone_pcp_disable() came along, it was possible to have isolated PCP
> pages in the lists even though zone->nr_isolate_pageblock could be 0 during
> memory hot-remove so the split in free_pcppages_bulk was not necessarily
> correct at all times.
>
> The remaining problem is alloc_contig_pages, it does not disable
> PCPs so both checks are necessary. If that also disabled PCPs
> then check_migratetype_isolated could be deleted but the cost to
> alloc_contig_pages might be too high.

I see. Well that's unfortunate if checking zone->nr_isolate_pageblock is not
sufficient, as it was supposed to be :( But I don't think the
check_migratetype_isolated() check was helping in such scenario as it was,
anyway. It's testing this:

+	if (unlikely(has_isolate_pageblock(zone) ||
+		     is_migrate_isolate(migratetype))) {

In the context of free_one_page(), the 'migratetype' variable holds a value
that's read from pageblock in one of the callers of free_one_page():

	migratetype = get_pcppage_migratetype(page);

and because it's read outside of zone lock, it might be a MIGRATE_ISOLATE even
though after we obtain the zone lock, we might find out it's not anymore. This
is explained in commit ad53f92eb416 ("mm/page_alloc: fix incorrect isolation
behavior by rechecking migratetype") as scenario 1.

However, in the context of free_pcppages_bulk(), the migratetype we are
checking in check_migratetype_isolated() is this one:

	int mt = get_pcppage_migratetype(page);

That was the one determined while adding the page to pcplist, and is stored in
the struct page and we know it's not MIGRATE_ISOLATE otherwise the page would
not go to pcplist. But by rechecking this stored value, we would not be finding
the case where the underlying pageblock's migratetype would change to
MIGRATE_ISOLATE, anyway...

> I'll delete this patch for now because it's relatively minor and there
> should be other ways of keeping the code duplication down.
>
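[Editorial note: to make the contrast concrete, a minimal sketch of the free_one_page() path being discussed, condensed from the v5.12-era code and the snippets quoted above. It is an illustration, not a literal quote; page, pfn, zone, order and fpi_flags are assumed to be in scope.]

	/*
	 * The migratetype is sampled outside zone->lock (and stashed in the
	 * page by the caller), so the pageblock may be isolated or un-isolated
	 * before the lock is finally taken -- this is where the recheck can
	 * actually change the outcome.
	 */
	int migratetype = get_pcppage_migratetype(page);	/* pre-lock snapshot */

	spin_lock(&zone->lock);
	if (unlikely(has_isolate_pageblock(zone) ||		/* recheck under the lock */
		     is_migrate_isolate(migratetype)))
		migratetype = get_pfnblock_migratetype(page, pfn);
	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
	spin_unlock(&zone->lock);

	/*
	 * On the free_pcppages_bulk() side, the stored value can never be
	 * MIGRATE_ISOLATE (the patch asserts this with VM_BUG_ON_PAGE), so the
	 * is_migrate_isolate() half of the shared helper cannot trigger there;
	 * only the has_isolate_pageblock(zone) half can.
	 */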
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 295624fe293b..1ed370668e7f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1354,6 +1354,23 @@ static inline void prefetch_buddy(struct page *page)
 	prefetch(buddy);
 }
 
+/*
+ * The migratetype of a page may have changed due to isolation so check.
+ * Assumes the caller holds the zone->lock to serialise against page
+ * isolation.
+ */
+static inline int
+check_migratetype_isolated(struct zone *zone, struct page *page, unsigned long pfn, int migratetype)
+{
+	/* If isolating, check if the migratetype has changed */
+	if (unlikely(has_isolate_pageblock(zone) ||
+		     is_migrate_isolate(migratetype))) {
+		migratetype = get_pfnblock_migratetype(page, pfn);
+	}
+
+	return migratetype;
+}
+
 /*
  * Frees a number of pages from the PCP lists
  * Assumes all pages on list are in same zone, and of same order.
@@ -1371,7 +1388,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	int migratetype = 0;
 	int batch_free = 0;
 	int prefetch_nr = READ_ONCE(pcp->batch);
-	bool isolated_pageblocks;
 	struct page *page, *tmp;
 	LIST_HEAD(head);
 
@@ -1433,21 +1449,20 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	 * both PREEMPT_RT and non-PREEMPT_RT configurations.
 	 */
 	spin_lock(&zone->lock);
-	isolated_pageblocks = has_isolate_pageblock(zone);
 
 	/*
 	 * Use safe version since after __free_one_page(),
 	 * page->lru.next will not point to original list.
 	 */
 	list_for_each_entry_safe(page, tmp, &head, lru) {
+		unsigned long pfn = page_to_pfn(page);
 		int mt = get_pcppage_migratetype(page);
+
 		/* MIGRATE_ISOLATE page should not go to pcplists */
 		VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
-		/* Pageblock could have been isolated meanwhile */
-		if (unlikely(isolated_pageblocks))
-			mt = get_pageblock_migratetype(page);
 
-		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FPI_NONE);
+		mt = check_migratetype_isolated(zone, page, pfn, mt);
+		__free_one_page(page, pfn, zone, 0, mt, FPI_NONE);
 		trace_mm_page_pcpu_drain(page, 0, mt);
 	}
 	spin_unlock(&zone->lock);
@@ -1459,10 +1474,7 @@ static void free_one_page(struct zone *zone,
 		int migratetype, fpi_t fpi_flags)
 {
 	spin_lock(&zone->lock);
-	if (unlikely(has_isolate_pageblock(zone) ||
-		     is_migrate_isolate(migratetype))) {
-		migratetype = get_pfnblock_migratetype(page, pfn);
-	}
+	migratetype = check_migratetype_isolated(zone, page, pfn, migratetype);
 	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
 	spin_unlock(&zone->lock);
 }
Both free_pcppages_bulk() and free_one_page() have very similar
checks about whether a page's migratetype has changed under the
zone lock. Use a common helper.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)