Message ID | 20231013085603.1227349-10-wangkefeng.wang@huawei.com (mailing list archive)
---|---
State | New
Series | mm: convert page cpupid functions to folios
On Fri, Oct 13, 2023 at 04:55:53PM +0800, Kefeng Wang wrote:
> Use a folio in change_pte_range() to save three compound_head() calls.

Yes, but here we have a change of behaviour, which should be argued is
desirable. Before, if a partial THP was mapped, or a fs large folio, we
would do this to individual pages. Now we're doing it to the entire
folio. Is that desirable? I don't have the background to argue either
way.

> @@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>  					continue;
>  				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
>  				    !toptier)
> -					xchg_page_access_time(page,
> +					folio_xchg_access_time(folio,
>  						jiffies_to_msecs(jiffies));
>  			}
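To make the question concrete: change_pte_range() runs once per PTE, so for a PTE-mapped large folio it runs once per base page either way. A minimal sketch of the two semantics, assuming (as in kernels of this era) that the cpupid/access-time stamp lives in each base page before the change and is kept via the head page afterwards:

```c
/*
 * Sketch of the behaviour change under discussion; not part of the
 * patch. change_pte_range() visits every PTE, i.e. every base page of
 * a PTE-mapped large folio.
 */

/* Before: the stamp lands in this base page's own cpupid slot, so each
 * page of a partially-mapped THP or fs large folio kept its own value.
 */
xchg_page_access_time(page, jiffies_to_msecs(jiffies));

/* After: folio_xchg_access_time() stores via the head page, so every
 * PTE of the folio now overwrites the same single stamp.
 */
folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));
```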
On 2023/10/13 23:13, Matthew Wilcox wrote:
> On Fri, Oct 13, 2023 at 04:55:53PM +0800, Kefeng Wang wrote:
>> Use a folio in change_pte_range() to save three compound_head() calls.
>
> Yes, but here we have a change of behaviour, which should be argued is
> desirable. Before, if a partial THP was mapped, or a fs large folio, we
> would do this to individual pages. Now we're doing it to the entire
> folio. Is that desirable? I don't have the background to argue either
> way.

Huang's reply on v1 [1] already covered this: only the head page's
last_cpupid is used, and large folios are not handled by do_numa_page().
If large folio NUMA balancing is supported later, we could migrate an
entire large folio that is mapped by only one process, or possibly split
a large folio that is mapped by multiple processes; when splitting, the
last_cpupid is copied from the head page to each tail page.

Either way, I don't think this change or the wp_page_reuse() one breaks
the current NUMA balancing.

Thanks.

[1] https://lore.kernel.org/linux-mm/874jixhfeu.fsf@yhuang6-desk2.ccr.corp.intel.com/

>
>> @@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  					continue;
>>  				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
>>  				    !toptier)
>> -					xchg_page_access_time(page,
>> +					folio_xchg_access_time(folio,
>>  						jiffies_to_msecs(jiffies));
>>  			}
>
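For reference, the copy-on-split behaviour Kefeng describes corresponds to __split_huge_page_tail() in mm/huge_memory.c. A heavily abridged sketch (the real function takes more parameters and copies far more state; details vary by kernel version):

```c
/*
 * Abridged sketch of __split_huge_page_tail() from mm/huge_memory.c.
 * The point relevant here: on split, each tail page inherits the head
 * page's last_cpupid, so keeping the NUMA hint only in the head while
 * the folio is large loses nothing.
 */
static void __split_huge_page_tail(struct page *head, int tail)
{
	struct page *page_tail = head + tail;

	/* ... flags, ->mapping, ->index etc. copied from the head ... */

	/* Propagate the NUMA hint state to the new base page. */
	page_cpupid_xchg_last(page_tail, page_cpupid_last(head));
}
```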
diff --git a/mm/mprotect.c b/mm/mprotect.c
index f1dc8f8c84ef..81991102f785 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -114,7 +114,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 * pages. See similar comment in change_huge_pmd.
 			 */
 			if (prot_numa) {
-				struct page *page;
+				struct folio *folio;
 				int nid;
 				bool toptier;
 
@@ -122,13 +122,14 @@ static long change_pte_range(struct mmu_gather *tlb,
 				if (pte_protnone(oldpte))
 					continue;
 
-				page = vm_normal_page(vma, addr, oldpte);
-				if (!page || is_zone_device_page(page) || PageKsm(page))
+				folio = vm_normal_folio(vma, addr, oldpte);
+				if (!folio || folio_is_zone_device(folio) ||
+				    folio_test_ksm(folio))
 					continue;
 
 				/* Also skip shared copy-on-write pages */
 				if (is_cow_mapping(vma->vm_flags) &&
-				    page_count(page) != 1)
+				    folio_ref_count(folio) != 1)
 					continue;
 
 				/*
@@ -136,14 +137,15 @@ static long change_pte_range(struct mmu_gather *tlb,
 				 * it cannot move them all from MIGRATE_ASYNC
 				 * context.
 				 */
-				if (page_is_file_lru(page) && PageDirty(page))
+				if (folio_is_file_lru(folio) &&
+				    folio_test_dirty(folio))
 					continue;
 
 				/*
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				nid = page_to_nid(page);
+				nid = folio_nid(folio);
 				if (target_node == nid)
 					continue;
 				toptier = node_is_toptier(nid);
@@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 					continue;
 				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 				    !toptier)
-					xchg_page_access_time(page,
+					folio_xchg_access_time(folio,
 						jiffies_to_msecs(jiffies));
 			}
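For context, vm_normal_folio() used above is a thin wrapper around vm_normal_page(); abridged from mm/memory.c. Resolving the folio once up front is what lets the later folio_test_ksm(), folio_is_file_lru() and folio_test_dirty() checks skip the compound_head() call hidden in PageKsm(), page_is_file_lru() and PageDirty(), which is where the three saved calls come from:

```c
/* Abridged from mm/memory.c: resolve the PTE to a page, then convert
 * to its folio exactly once; callers then test folio state directly.
 */
struct folio *vm_normal_folio(struct vm_area_struct *vma,
			      unsigned long addr, pte_t pte)
{
	struct page *page = vm_normal_page(vma, addr, pte);

	if (page)
		return page_folio(page);
	return NULL;
}
```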
Use a folio in change_pte_range() to save three compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/mprotect.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)