Message ID | 20220203171904.609984-2-willy@infradead.org (mailing list archive)
---|---
State | New
Series | [1/2] mm: Add pvmw_set_page()
Hi Matthew,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on tip/perf/core v5.17-rc2 next-20220203]
[cannot apply to hnaz-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Matthew-Wilcox-Oracle/mm-Add-pvmw_set_page/20220204-012111
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git c36c04c2e132fc39f6b658bf607aed4425427fd7
config: riscv-randconfig-r042-20220131 (https://download.01.org/0day-ci/archive/20220204/202202040625.ZAq9Op2f-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project a73e4ce6a59b01f0e37037761c1e6889d539d233)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install riscv cross compiling tool for clang build
        # apt-get install binutils-riscv64-linux-gnu
        # https://github.com/0day-ci/linux/commit/4a5a2cece4c5d9ac56322c6828efbea7fcd2e480
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Matthew-Wilcox-Oracle/mm-Add-pvmw_set_page/20220204-012111
        git checkout 4a5a2cece4c5d9ac56322c6828efbea7fcd2e480
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=riscv SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   >> mm/page_vma_mapped.c:245:27: error: call to __compiletime_assert_255 declared with 'error' attribute: BUILD_BUG failed
                          (pvmw->nr_pages >= HPAGE_PMD_NR)) {
                                             ^
      include/linux/huge_mm.h:105:26: note: expanded from macro 'HPAGE_PMD_NR'
      #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
                               ^
      include/linux/huge_mm.h:104:26: note: expanded from macro 'HPAGE_PMD_ORDER'
      #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
                               ^
      include/linux/huge_mm.h:331:28: note: expanded from macro 'HPAGE_PMD_SHIFT'
      #define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
                                 ^
      note: (skipping 3 expansions in backtrace; use -fmacro-backtrace-limit=0 to see all)
      include/linux/compiler_types.h:334:2: note: expanded from macro '_compiletime_assert'
              __compiletime_assert(condition, msg, prefix, suffix)
              ^
      include/linux/compiler_types.h:327:4: note: expanded from macro '__compiletime_assert'
              prefix ## suffix();                                     \
              ^
      <scratch space>:161:1: note: expanded from here
      __compiletime_assert_255
      ^
      1 error generated.


vim +/error +245 mm/page_vma_mapped.c

   126
   127  /**
   128   * page_vma_mapped_walk - check if @pvmw->pfn is mapped in @pvmw->vma at
   129   * @pvmw->address
   130   * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
   131   * must be set. pmd, pte and ptl must be NULL.
   132   *
   133   * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
   134   * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
   135   * adjusted if needed (for PTE-mapped THPs).
   136   *
   137   * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
   138   * (usually THP). For PTE-mapped THP, you should run page_vma_mapped_walk() in
   139   * a loop to find all PTEs that map the THP.
   140   *
   141   * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
   142   * regardless of which page table level the page is mapped at. @pvmw->pmd is
   143   * NULL.
   144   *
   145   * Returns false if there are no more page table entries for the page in
   146   * the vma. @pvmw->ptl is unlocked and @pvmw->pte is unmapped.
   147   *
   148   * If you need to stop the walk before page_vma_mapped_walk() returned false,
   149   * use page_vma_mapped_walk_done(). It will do the housekeeping.
   150   */
   151  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
   152  {
   153          struct vm_area_struct *vma = pvmw->vma;
   154          struct mm_struct *mm = vma->vm_mm;
   155          unsigned long end;
   156          pgd_t *pgd;
   157          p4d_t *p4d;
   158          pud_t *pud;
   159          pmd_t pmde;
   160
   161          /* The only possible pmd mapping has been handled on last iteration */
   162          if (pvmw->pmd && !pvmw->pte)
   163                  return not_found(pvmw);
   164
   165          if (unlikely(is_vm_hugetlb_page(vma))) {
   166                  unsigned long size = pvmw->nr_pages * PAGE_SIZE;
   167                  /* The only possible mapping was handled on last iteration */
   168                  if (pvmw->pte)
   169                          return not_found(pvmw);
   170
   171                  /* when pud is not present, pte will be NULL */
   172                  pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
   173                  if (!pvmw->pte)
   174                          return false;
   175
   176                  pvmw->ptl = huge_pte_lockptr(size_to_hstate(size), mm,
   177                                                  pvmw->pte);
   178                  spin_lock(pvmw->ptl);
   179                  if (!check_pte(pvmw))
   180                          return not_found(pvmw);
   181                  return true;
   182          }
   183
   184          end = vma_address_end(pvmw);
   185          if (pvmw->pte)
   186                  goto next_pte;
   187  restart:
   188          do {
   189                  pgd = pgd_offset(mm, pvmw->address);
   190                  if (!pgd_present(*pgd)) {
   191                          step_forward(pvmw, PGDIR_SIZE);
   192                          continue;
   193                  }
   194                  p4d = p4d_offset(pgd, pvmw->address);
   195                  if (!p4d_present(*p4d)) {
   196                          step_forward(pvmw, P4D_SIZE);
   197                          continue;
   198                  }
   199                  pud = pud_offset(p4d, pvmw->address);
   200                  if (!pud_present(*pud)) {
   201                          step_forward(pvmw, PUD_SIZE);
   202                          continue;
   203                  }
   204
   205                  pvmw->pmd = pmd_offset(pud, pvmw->address);
   206                  /*
   207                   * Make sure the pmd value isn't cached in a register by the
   208                   * compiler and used as a stale value after we've observed a
   209                   * subsequent update.
   210                   */
   211                  pmde = READ_ONCE(*pvmw->pmd);
   212
   213                  if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
   214                          pvmw->ptl = pmd_lock(mm, pvmw->pmd);
   215                          pmde = *pvmw->pmd;
   216                          if (likely(pmd_trans_huge(pmde))) {
   217                                  if (pvmw->flags & PVMW_MIGRATION)
   218                                          return not_found(pvmw);
   219                                  if (!check_pmd(pmd_pfn(pmde), pvmw))
   220                                          return not_found(pvmw);
   221                                  return true;
   222                          }
   223                          if (!pmd_present(pmde)) {
   224                                  swp_entry_t entry;
   225
   226                                  if (!thp_migration_supported() ||
   227                                      !(pvmw->flags & PVMW_MIGRATION))
   228                                          return not_found(pvmw);
   229                                  entry = pmd_to_swp_entry(pmde);
   230                                  if (!is_migration_entry(entry) ||
   231                                      !check_pmd(swp_offset(entry), pvmw))
   232                                          return not_found(pvmw);
   233                                  return true;
   234                          }
   235                          /* THP pmd was split under us: handle on pte level */
   236                          spin_unlock(pvmw->ptl);
   237                          pvmw->ptl = NULL;
   238                  } else if (!pmd_present(pmde)) {
   239                          /*
   240                           * If PVMW_SYNC, take and drop THP pmd lock so that we
   241                           * cannot return prematurely, while zap_huge_pmd() has
   242                           * cleared *pmd but not decremented compound_mapcount().
   243                           */
   244                          if ((pvmw->flags & PVMW_SYNC) &&
 > 245                              (pvmw->nr_pages >= HPAGE_PMD_NR)) {
   246                                  spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
   247
   248                                  spin_unlock(ptl);
   249                          }
   250                          step_forward(pvmw, PMD_SIZE);
   251                          continue;
   252                  }
   253                  if (!map_pte(pvmw))
   254                          goto next_pte;
   255  this_pte:
   256                  if (check_pte(pvmw))
   257                          return true;
   258  next_pte:
   259                  do {
   260                          pvmw->address += PAGE_SIZE;
   261                          if (pvmw->address >= end)
   262                                  return not_found(pvmw);
   263                          /* Did we cross page table boundary? */
   264                          if ((pvmw->address & (PMD_SIZE - PAGE_SIZE)) == 0) {
   265                                  if (pvmw->ptl) {
   266                                          spin_unlock(pvmw->ptl);
   267                                          pvmw->ptl = NULL;
   268                                  }
   269                                  pte_unmap(pvmw->pte);
   270                                  pvmw->pte = NULL;
   271                                  goto restart;
   272                          }
   273                          pvmw->pte++;
   274                          if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
   275                                  pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   276                                  spin_lock(pvmw->ptl);
   277                          }
   278                  } while (pte_none(*pvmw->pte));
   279
   280                  if (!pvmw->ptl) {
   281                          pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   282                          spin_lock(pvmw->ptl);
   283                  }
   284                  goto this_pte;
   285          } while (pvmw->address < end);
   286
   287          return false;
   288  }
   289

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Hi Matthew,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on tip/perf/core v5.17-rc2 next-20220203]
[cannot apply to hnaz-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Matthew-Wilcox-Oracle/mm-Add-pvmw_set_page/20220204-012111
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git c36c04c2e132fc39f6b658bf607aed4425427fd7
config: hexagon-randconfig-r045-20220130 (https://download.01.org/0day-ci/archive/20220204/202202040601.7rVVBoVe-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project a73e4ce6a59b01f0e37037761c1e6889d539d233)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/4a5a2cece4c5d9ac56322c6828efbea7fcd2e480
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Matthew-Wilcox-Oracle/mm-Add-pvmw_set_page/20220204-012111
        git checkout 4a5a2cece4c5d9ac56322c6828efbea7fcd2e480
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   >> mm/page_vma_mapped.c:219:20: error: implicit declaration of function 'pmd_pfn' [-Werror,-Wimplicit-function-declaration]
                      if (!check_pmd(pmd_pfn(pmde), pvmw))
                                     ^
      mm/page_vma_mapped.c:219:20: note: did you mean 'pmd_off'?
      include/linux/pgtable.h:149:22: note: 'pmd_off' declared here
      static inline pmd_t *pmd_off(struct mm_struct *mm, unsigned long va)
                           ^
      1 error generated.

vim +/pmd_pfn +219 mm/page_vma_mapped.c

   126
   127  /**
   128   * page_vma_mapped_walk - check if @pvmw->pfn is mapped in @pvmw->vma at
   129   * @pvmw->address
   130   * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
   131   * must be set. pmd, pte and ptl must be NULL.
   132   *
   133   * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
   134   * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
   135   * adjusted if needed (for PTE-mapped THPs).
   136   *
   137   * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
   138   * (usually THP). For PTE-mapped THP, you should run page_vma_mapped_walk() in
   139   * a loop to find all PTEs that map the THP.
   140   *
   141   * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
   142   * regardless of which page table level the page is mapped at. @pvmw->pmd is
   143   * NULL.
   144   *
   145   * Returns false if there are no more page table entries for the page in
   146   * the vma. @pvmw->ptl is unlocked and @pvmw->pte is unmapped.
   147   *
   148   * If you need to stop the walk before page_vma_mapped_walk() returned false,
   149   * use page_vma_mapped_walk_done(). It will do the housekeeping.
   150   */
   151  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
   152  {
   153          struct vm_area_struct *vma = pvmw->vma;
   154          struct mm_struct *mm = vma->vm_mm;
   155          unsigned long end;
   156          pgd_t *pgd;
   157          p4d_t *p4d;
   158          pud_t *pud;
   159          pmd_t pmde;
   160
   161          /* The only possible pmd mapping has been handled on last iteration */
   162          if (pvmw->pmd && !pvmw->pte)
   163                  return not_found(pvmw);
   164
   165          if (unlikely(is_vm_hugetlb_page(vma))) {
   166                  unsigned long size = pvmw->nr_pages * PAGE_SIZE;
   167                  /* The only possible mapping was handled on last iteration */
   168                  if (pvmw->pte)
   169                          return not_found(pvmw);
   170
   171                  /* when pud is not present, pte will be NULL */
   172                  pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
   173                  if (!pvmw->pte)
   174                          return false;
   175
   176                  pvmw->ptl = huge_pte_lockptr(size_to_hstate(size), mm,
   177                                                  pvmw->pte);
   178                  spin_lock(pvmw->ptl);
   179                  if (!check_pte(pvmw))
   180                          return not_found(pvmw);
   181                  return true;
   182          }
   183
   184          end = vma_address_end(pvmw);
   185          if (pvmw->pte)
   186                  goto next_pte;
   187  restart:
   188          do {
   189                  pgd = pgd_offset(mm, pvmw->address);
   190                  if (!pgd_present(*pgd)) {
   191                          step_forward(pvmw, PGDIR_SIZE);
   192                          continue;
   193                  }
   194                  p4d = p4d_offset(pgd, pvmw->address);
   195                  if (!p4d_present(*p4d)) {
   196                          step_forward(pvmw, P4D_SIZE);
   197                          continue;
   198                  }
   199                  pud = pud_offset(p4d, pvmw->address);
   200                  if (!pud_present(*pud)) {
   201                          step_forward(pvmw, PUD_SIZE);
   202                          continue;
   203                  }
   204
   205                  pvmw->pmd = pmd_offset(pud, pvmw->address);
   206                  /*
   207                   * Make sure the pmd value isn't cached in a register by the
   208                   * compiler and used as a stale value after we've observed a
   209                   * subsequent update.
   210                   */
   211                  pmde = READ_ONCE(*pvmw->pmd);
   212
   213                  if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
   214                          pvmw->ptl = pmd_lock(mm, pvmw->pmd);
   215                          pmde = *pvmw->pmd;
   216                          if (likely(pmd_trans_huge(pmde))) {
   217                                  if (pvmw->flags & PVMW_MIGRATION)
   218                                          return not_found(pvmw);
 > 219                                  if (!check_pmd(pmd_pfn(pmde), pvmw))
   220                                          return not_found(pvmw);
   221                                  return true;
   222                          }
   223                          if (!pmd_present(pmde)) {
   224                                  swp_entry_t entry;
   225
   226                                  if (!thp_migration_supported() ||
   227                                      !(pvmw->flags & PVMW_MIGRATION))
   228                                          return not_found(pvmw);
   229                                  entry = pmd_to_swp_entry(pmde);
   230                                  if (!is_migration_entry(entry) ||
   231                                      !check_pmd(swp_offset(entry), pvmw))
   232                                          return not_found(pvmw);
   233                                  return true;
   234                          }
   235                          /* THP pmd was split under us: handle on pte level */
   236                          spin_unlock(pvmw->ptl);
   237                          pvmw->ptl = NULL;
   238                  } else if (!pmd_present(pmde)) {
   239                          /*
   240                           * If PVMW_SYNC, take and drop THP pmd lock so that we
   241                           * cannot return prematurely, while zap_huge_pmd() has
   242                           * cleared *pmd but not decremented compound_mapcount().
   243                           */
   244                          if ((pvmw->flags & PVMW_SYNC) &&
   245                              (pvmw->nr_pages >= HPAGE_PMD_NR)) {
   246                                  spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
   247
   248                                  spin_unlock(ptl);
   249                          }
   250                          step_forward(pvmw, PMD_SIZE);
   251                          continue;
   252                  }
   253                  if (!map_pte(pvmw))
   254                          goto next_pte;
   255  this_pte:
   256                  if (check_pte(pvmw))
   257                          return true;
   258  next_pte:
   259                  do {
   260                          pvmw->address += PAGE_SIZE;
   261                          if (pvmw->address >= end)
   262                                  return not_found(pvmw);
   263                          /* Did we cross page table boundary? */
   264                          if ((pvmw->address & (PMD_SIZE - PAGE_SIZE)) == 0) {
   265                                  if (pvmw->ptl) {
   266                                          spin_unlock(pvmw->ptl);
   267                                          pvmw->ptl = NULL;
   268                                  }
   269                                  pte_unmap(pvmw->pte);
   270                                  pvmw->pte = NULL;
   271                                  goto restart;
   272                          }
   273                          pvmw->pte++;
   274                          if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
   275                                  pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   276                                  spin_lock(pvmw->ptl);
   277                          }
   278                  } while (pte_none(*pvmw->pte));
   279
   280                  if (!pvmw->ptl) {
   281                          pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   282                          spin_lock(pvmw->ptl);
   283                  }
   284                  goto this_pte;
   285          } while (pvmw->address < end);
   286
   287          return false;
   288  }
   289
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d1897a69c540..6ba2f8e74fbb 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -970,6 +970,11 @@ static inline struct hstate *page_hstate(struct page *page)
 	return NULL;
 }
 
+static inline struct hstate *size_to_hstate(unsigned long size)
+{
+	return NULL;
+}
+
 static inline unsigned long huge_page_size(struct hstate *h)
 {
 	return PAGE_SIZE;
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 003bb5775bb1..6c0ebbd96e95 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -11,6 +11,7 @@
 #include <linux/rwsem.h>
 #include <linux/memcontrol.h>
 #include <linux/highmem.h>
+#include <linux/pagemap.h>
 
 /*
  * The anon_vma heads a list of private "related" vmas, to scan if
@@ -200,11 +201,13 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 
 /* Avoid racy checks */
 #define PVMW_SYNC		(1 << 0)
-/* Look for migarion entries rather than present PTEs */
+/* Look for migration entries rather than present PTEs */
 #define PVMW_MIGRATION		(1 << 1)
 
 struct page_vma_mapped_walk {
-	struct page *page;
+	unsigned long pfn;
+	unsigned long nr_pages;
+	pgoff_t pgoff;
 	struct vm_area_struct *vma;
 	unsigned long address;
 	pmd_t *pmd;
@@ -216,13 +219,15 @@ struct page_vma_mapped_walk {
 static inline void pvmw_set_page(struct page_vma_mapped_walk *pvmw,
 					struct page *page)
 {
-	pvmw->page = page;
+	pvmw->pfn = page_to_pfn(page);
+	pvmw->nr_pages = compound_nr(page);
+	pvmw->pgoff = page_to_pgoff(page);
 }
 
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
 	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
-	if (pvmw->pte && !PageHuge(pvmw->page))
+	if (pvmw->pte && !is_vm_hugetlb_page(pvmw->vma))
 		pte_unmap(pvmw->pte);
 	if (pvmw->ptl)
 		spin_unlock(pvmw->ptl);
diff --git a/mm/internal.h b/mm/internal.h
index b7a2195c12b1..7f1db0f1a8bc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -10,6 +10,7 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/pagemap.h>
+#include <linux/rmap.h>
 #include <linux/tracepoint-defs.h>
 
 struct folio_batch;
@@ -459,18 +460,20 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
- * Then at what user virtual address will none of the page be found in vma?
+ * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
- * If page is a compound head, the entire compound page is considered.
  */
-static inline unsigned long
-vma_address_end(struct page *page, struct vm_area_struct *vma)
+static inline unsigned long vma_address_end(struct page_vma_mapped_walk *pvmw)
 {
+	struct vm_area_struct *vma = pvmw->vma;
 	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page) + compound_nr(page);
+	/* Common case, plus ->pgoff is invalid for KSM */
+	if (pvmw->nr_pages == 1)
+		return pvmw->address + PAGE_SIZE;
+
+	pgoff = pvmw->pgoff + pvmw->nr_pages;
 	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 	/* Check for address beyond vma (or wrapped through 0?) */
 	if (address < vma->vm_start || address > vma->vm_end)
diff --git a/mm/migrate.c b/mm/migrate.c
index 07464fd45925..766dc67874a1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -191,7 +191,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		if (PageKsm(page))
 			new = page;
 		else
-			new = page - pvmw.page->index +
+			new = page - pvmw.pgoff +
 				linear_page_index(vma, pvmw.address);
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index f7b331081791..228d5103e6d1 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -53,18 +53,6 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
 	return true;
 }
 
-static inline bool pfn_is_match(struct page *page, unsigned long pfn)
-{
-	unsigned long page_pfn = page_to_pfn(page);
-
-	/* normal page and hugetlbfs page */
-	if (!PageTransCompound(page) || PageHuge(page))
-		return page_pfn == pfn;
-
-	/* THP can be referenced by any subpage */
-	return pfn >= page_pfn && pfn - page_pfn < thp_nr_pages(page);
-}
-
 /**
  * check_pte - check if @pvmw->page is mapped at the @pvmw->pte
  * @pvmw: page_vma_mapped_walk struct, includes a pair pte and page for checking
@@ -116,7 +104,17 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		pfn = pte_pfn(*pvmw->pte);
 	}
 
-	return pfn_is_match(pvmw->page, pfn);
+	return (pfn - pvmw->pfn) < pvmw->nr_pages;
+}
+
+/* Returns true if the two ranges overlap.  Careful to not overflow. */
+static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
+{
+	if ((pfn + HPAGE_PMD_NR - 1) < pvmw->pfn)
+		return false;
+	if (pfn > pvmw->pfn + pvmw->nr_pages - 1)
+		return false;
+	return true;
 }
 
 static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
@@ -127,7 +125,7 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
 }
 
 /**
- * page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at
+ * page_vma_mapped_walk - check if @pvmw->pfn is mapped in @pvmw->vma at
  * @pvmw->address
  * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
  * must be set. pmd, pte and ptl must be NULL.
@@ -152,8 +150,8 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
  */
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 {
-	struct mm_struct *mm = pvmw->vma->vm_mm;
-	struct page *page = pvmw->page;
+	struct vm_area_struct *vma = pvmw->vma;
+	struct mm_struct *mm = vma->vm_mm;
 	unsigned long end;
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -164,32 +162,26 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (pvmw->pmd && !pvmw->pte)
 		return not_found(pvmw);
 
-	if (unlikely(PageHuge(page))) {
+	if (unlikely(is_vm_hugetlb_page(vma))) {
+		unsigned long size = pvmw->nr_pages * PAGE_SIZE;
 		/* The only possible mapping was handled on last iteration */
 		if (pvmw->pte)
 			return not_found(pvmw);
 
 		/* when pud is not present, pte will be NULL */
-		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
+		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
 		if (!pvmw->pte)
 			return false;
 
-		pvmw->ptl = huge_pte_lockptr(page_hstate(page), mm, pvmw->pte);
+		pvmw->ptl = huge_pte_lockptr(size_to_hstate(size), mm,
+						pvmw->pte);
 		spin_lock(pvmw->ptl);
 		if (!check_pte(pvmw))
 			return not_found(pvmw);
 		return true;
 	}
 
-	/*
-	 * Seek to next pte only makes sense for THP.
-	 * But more important than that optimization, is to filter out
-	 * any PageKsm page: whose page->index misleads vma_address()
-	 * and vma_address_end() to disaster.
-	 */
-	end = PageTransCompound(page) ?
-		vma_address_end(page, pvmw->vma) :
-		pvmw->address + PAGE_SIZE;
+	end = vma_address_end(pvmw);
 	if (pvmw->pte)
 		goto next_pte;
 restart:
@@ -224,7 +216,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		if (likely(pmd_trans_huge(pmde))) {
 			if (pvmw->flags & PVMW_MIGRATION)
 				return not_found(pvmw);
-			if (pmd_page(pmde) != page)
+			if (!check_pmd(pmd_pfn(pmde), pvmw))
 				return not_found(pvmw);
 			return true;
 		}
@@ -236,7 +228,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				return not_found(pvmw);
 			entry = pmd_to_swp_entry(pmde);
 			if (!is_migration_entry(entry) ||
-			    pfn_swap_entry_to_page(entry) != page)
+			    !check_pmd(swp_offset(entry), pvmw))
 				return not_found(pvmw);
 			return true;
 		}
@@ -250,7 +242,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			 * cleared *pmd but not decremented compound_mapcount().
 			 */
 			if ((pvmw->flags & PVMW_SYNC) &&
-			    PageTransCompound(page)) {
+			    (pvmw->nr_pages >= HPAGE_PMD_NR)) {
 				spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
 
 				spin_unlock(ptl);
@@ -307,7 +299,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
+		.pfn = page_to_pfn(page),
+		.nr_pages = 1,
 		.vma = vma,
 		.flags = PVMW_SYNC,
 	};
diff --git a/mm/rmap.c b/mm/rmap.c
index fa8478372e94..d62a6fcef318 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -946,7 +946,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, vma, vma->vm_mm, address,
-				vma_address_end(page, vma));
+				vma_address_end(&pvmw));
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
@@ -1453,8 +1453,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
-	range.end = PageKsm(page) ?
-			address + PAGE_SIZE : vma_address_end(page, vma);
+	range.end = vma_address_end(&pvmw);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address, range.end);
 	if (PageHuge(page)) {
@@ -1757,8 +1756,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
-	range.end = PageKsm(page) ?
-			address + PAGE_SIZE : vma_address_end(page, vma);
+	range.end = vma_address_end(&pvmw);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address, range.end);
 	if (PageHuge(page)) {
page_mapped_in_vma() really just wants to walk one page, but as the code
stands, if passed the head page of a compound page, it will walk every
page in the compound page.  Extract pfn/nr_pages/pgoff from the struct
page early, so they can be overridden by page_mapped_in_vma().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/hugetlb.h |  5 ++++
 include/linux/rmap.h    | 13 +++++++---
 mm/internal.h           | 15 ++++++-----
 mm/migrate.c            |  2 +-
 mm/page_vma_mapped.c    | 57 ++++++++++++++++++-----------------------
 mm/rmap.c               |  8 +++---
 6 files changed, 52 insertions(+), 48 deletions(-)