| Message ID | 20230802093741.2333325-3-shikemeng@huaweicloud.com (mailing list archive) |
|---|---|
| State | New |
| Series | Fixes and cleanups to compaction |
On 8/2/2023 5:37 PM, Kemeng Shi wrote:
> We record the start pfn of the last isolated pageblock in
> last_migrated_pfn. Then:
> 1. In isolate_migratepages_block, we check whether we marked the
>    pageblock skip for exclusive access by testing whether the next
>    migrate pfn still falls within the last isolated pageblock. If so,
>    we set finish_pageblock to do a rescan.
> 2. We check whether a full cc->order-aligned block has been scanned by
>    testing whether the last scan range passed the cc->order block
>    boundary. If so, we flush the pages that were freed.
>
> We treat cc->migrate_pfn, sampled before isolate_migratepages, as the
> start pfn of the last isolated page range. However, migrate_pfn is
> always aligned to a pageblock, or moved to another pageblock, in
> fast_find_migrateblock or during the linear forward scan in
> isolate_migratepages, before page isolation is done in
> isolate_migratepages_block.
>
> Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1)
> after the scan to correctly set the start pfn of the last isolated page
> range. This avoids:
> 1. Missing a rescan with finish_pageblock set, because last_migrated_pfn
>    does not point to the right pageblock and the next migrate pfn will
>    not fall in the pageblock of last_migrated_pfn as it should.
> 2. Wrongly issuing a flush when testing the cc->order block boundary
>    against a wrong last_migrated_pfn.
>
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

LGTM.

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> ---
>  mm/compaction.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index a8cea916df9d..ec3a96b7afce 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2487,7 +2487,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
>  			goto check_drain;
>  		case ISOLATE_SUCCESS:
>  			update_cached = false;
> -			last_migrated_pfn = iteration_start_pfn;
> +			last_migrated_pfn = max(cc->zone->zone_start_pfn,
> +						pageblock_start_pfn(cc->migrate_pfn - 1));
>  		}
>
>  		err = migrate_pages(&cc->migratepages, compaction_alloc,
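To make the arithmetic concrete, here is a minimal userspace sketch (not kernel code) of what the patched assignment computes. The pageblock geometry, the pfn values, and the helper `fixed_last_migrated_pfn` are illustrative assumptions, and `pageblock_start_pfn` below is a plain-C stand-in for the kernel macro that rounds a pfn down to its pageblock boundary:

```c
#include <assert.h>

/* Assumed pageblock geometry for illustration: 2^9 = 512 pages per
 * pageblock (the real value is architecture-dependent). */
#define PAGEBLOCK_ORDER    9
#define PAGEBLOCK_NR_PAGES (1UL << PAGEBLOCK_ORDER)

/* Stand-in for the kernel's pageblock_start_pfn(): round pfn down to
 * the start of the pageblock containing it. */
static unsigned long pageblock_start_pfn(unsigned long pfn)
{
	return pfn & ~(PAGEBLOCK_NR_PAGES - 1);
}

/* Mirrors the patched line in compact_zone():
 *   last_migrated_pfn = max(cc->zone->zone_start_pfn,
 *                           pageblock_start_pfn(cc->migrate_pfn - 1));
 * migrate_pfn - 1 is the last pfn actually scanned, so this yields the
 * start of the pageblock that was really isolated, clamped so it never
 * falls below the start of the zone. */
static unsigned long fixed_last_migrated_pfn(unsigned long zone_start_pfn,
					     unsigned long migrate_pfn)
{
	unsigned long start = pageblock_start_pfn(migrate_pfn - 1);

	return start > zone_start_pfn ? start : zone_start_pfn;
}
```

For example, with these assumed values: if an iteration started at pfn 1024 but the scanner moved into the pageblock [1536, 2048) before isolating, leaving migrate_pfn at 1800, the old code would record 1024 as last_migrated_pfn while `fixed_last_migrated_pfn(0, 1800)` yields 1536, the block that was actually isolated.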