Message ID: 20240327141034.3712697-4-wangkefeng.wang@huawei.com
State:      New
Series:     mm: remove isolate_lru_page() and isolate_movable_page()
Hi Kefeng,
kernel test robot noticed the following build errors:
[auto build test ERROR on linus/master]
[also build test ERROR on v6.9-rc1]
[cannot apply to akpm-mm/mm-everything next-20240328]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-migrate-add-isolate_movable_folio/20240327-221513
base: linus/master
patch link: https://lore.kernel.org/r/20240327141034.3712697-4-wangkefeng.wang%40huawei.com
patch subject: [PATCH 3/6] mm: remove isolate_lru_page()
config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20240328/202403282057.pIA3kJoz-lkp@intel.com/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403282057.pIA3kJoz-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403282057.pIA3kJoz-lkp@intel.com/
All errors (new ones prefixed by >>):
>> mm/migrate_device.c:388:9: error: call to undeclared function 'isolate_lru_page'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
388 | if (!isolate_lru_page(page)) {
| ^
mm/migrate_device.c:388:9: note: did you mean '__isolate_free_page'?
mm/internal.h:487:12: note: '__isolate_free_page' declared here
487 | extern int __isolate_free_page(struct page *page, unsigned int order);
| ^
1 error generated.
vim +/isolate_lru_page +388 mm/migrate_device.c
76cbbead253ddc Christoph Hellwig 2022-02-16 355
76cbbead253ddc Christoph Hellwig 2022-02-16 356 /*
44af0b45d58d7b Alistair Popple 2022-11-11 357 * Unmaps pages for migration. Returns number of source pfns marked as
44af0b45d58d7b Alistair Popple 2022-11-11 358 * migrating.
76cbbead253ddc Christoph Hellwig 2022-02-16 359 */
241f6885965683 Alistair Popple 2022-09-28 360 static unsigned long migrate_device_unmap(unsigned long *src_pfns,
241f6885965683 Alistair Popple 2022-09-28 361 unsigned long npages,
241f6885965683 Alistair Popple 2022-09-28 362 struct page *fault_page)
76cbbead253ddc Christoph Hellwig 2022-02-16 363 {
76cbbead253ddc Christoph Hellwig 2022-02-16 364 unsigned long i, restore = 0;
76cbbead253ddc Christoph Hellwig 2022-02-16 365 bool allow_drain = true;
241f6885965683 Alistair Popple 2022-09-28 366 unsigned long unmapped = 0;
76cbbead253ddc Christoph Hellwig 2022-02-16 367
76cbbead253ddc Christoph Hellwig 2022-02-16 368 lru_add_drain();
76cbbead253ddc Christoph Hellwig 2022-02-16 369
76cbbead253ddc Christoph Hellwig 2022-02-16 370 for (i = 0; i < npages; i++) {
241f6885965683 Alistair Popple 2022-09-28 371 struct page *page = migrate_pfn_to_page(src_pfns[i]);
4b8554c527f3cf Matthew Wilcox (Oracle 2022-01-28 372) struct folio *folio;
76cbbead253ddc Christoph Hellwig 2022-02-16 373
44af0b45d58d7b Alistair Popple 2022-11-11 374 if (!page) {
44af0b45d58d7b Alistair Popple 2022-11-11 375 if (src_pfns[i] & MIGRATE_PFN_MIGRATE)
44af0b45d58d7b Alistair Popple 2022-11-11 376 unmapped++;
76cbbead253ddc Christoph Hellwig 2022-02-16 377 continue;
44af0b45d58d7b Alistair Popple 2022-11-11 378 }
76cbbead253ddc Christoph Hellwig 2022-02-16 379
76cbbead253ddc Christoph Hellwig 2022-02-16 380 /* ZONE_DEVICE pages are not on LRU */
76cbbead253ddc Christoph Hellwig 2022-02-16 381 if (!is_zone_device_page(page)) {
76cbbead253ddc Christoph Hellwig 2022-02-16 382 if (!PageLRU(page) && allow_drain) {
1fec6890bf2247 Matthew Wilcox (Oracle 2023-06-21 383) /* Drain CPU's lru cache */
76cbbead253ddc Christoph Hellwig 2022-02-16 384 lru_add_drain_all();
76cbbead253ddc Christoph Hellwig 2022-02-16 385 allow_drain = false;
76cbbead253ddc Christoph Hellwig 2022-02-16 386 }
76cbbead253ddc Christoph Hellwig 2022-02-16 387
f7f9c00dfafffd Baolin Wang 2023-02-15 @388 if (!isolate_lru_page(page)) {
241f6885965683 Alistair Popple 2022-09-28 389 src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
76cbbead253ddc Christoph Hellwig 2022-02-16 390 restore++;
76cbbead253ddc Christoph Hellwig 2022-02-16 391 continue;
76cbbead253ddc Christoph Hellwig 2022-02-16 392 }
76cbbead253ddc Christoph Hellwig 2022-02-16 393
76cbbead253ddc Christoph Hellwig 2022-02-16 394 /* Drop the reference we took in collect */
76cbbead253ddc Christoph Hellwig 2022-02-16 395 put_page(page);
76cbbead253ddc Christoph Hellwig 2022-02-16 396 }
76cbbead253ddc Christoph Hellwig 2022-02-16 397
4b8554c527f3cf Matthew Wilcox (Oracle 2022-01-28 398) folio = page_folio(page);
4b8554c527f3cf Matthew Wilcox (Oracle 2022-01-28 399) if (folio_mapped(folio))
4b8554c527f3cf Matthew Wilcox (Oracle 2022-01-28 400) try_to_migrate(folio, 0);
76cbbead253ddc Christoph Hellwig 2022-02-16 401
16ce101db85db6 Alistair Popple 2022-09-28 402 if (page_mapped(page) ||
241f6885965683 Alistair Popple 2022-09-28 403 !migrate_vma_check_page(page, fault_page)) {
76cbbead253ddc Christoph Hellwig 2022-02-16 404 if (!is_zone_device_page(page)) {
76cbbead253ddc Christoph Hellwig 2022-02-16 405 get_page(page);
76cbbead253ddc Christoph Hellwig 2022-02-16 406 putback_lru_page(page);
76cbbead253ddc Christoph Hellwig 2022-02-16 407 }
76cbbead253ddc Christoph Hellwig 2022-02-16 408
241f6885965683 Alistair Popple 2022-09-28 409 src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
76cbbead253ddc Christoph Hellwig 2022-02-16 410 restore++;
76cbbead253ddc Christoph Hellwig 2022-02-16 411 continue;
76cbbead253ddc Christoph Hellwig 2022-02-16 412 }
241f6885965683 Alistair Popple 2022-09-28 413
241f6885965683 Alistair Popple 2022-09-28 414 unmapped++;
76cbbead253ddc Christoph Hellwig 2022-02-16 415 }
76cbbead253ddc Christoph Hellwig 2022-02-16 416
76cbbead253ddc Christoph Hellwig 2022-02-16 417 for (i = 0; i < npages && restore; i++) {
241f6885965683 Alistair Popple 2022-09-28 418 struct page *page = migrate_pfn_to_page(src_pfns[i]);
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 419) struct folio *folio;
76cbbead253ddc Christoph Hellwig 2022-02-16 420
241f6885965683 Alistair Popple 2022-09-28 421 if (!page || (src_pfns[i] & MIGRATE_PFN_MIGRATE))
76cbbead253ddc Christoph Hellwig 2022-02-16 422 continue;
76cbbead253ddc Christoph Hellwig 2022-02-16 423
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 424) folio = page_folio(page);
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 425) remove_migration_ptes(folio, folio, false);
76cbbead253ddc Christoph Hellwig 2022-02-16 426
241f6885965683 Alistair Popple 2022-09-28 427 src_pfns[i] = 0;
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 428) folio_unlock(folio);
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 429) folio_put(folio);
76cbbead253ddc Christoph Hellwig 2022-02-16 430 restore--;
76cbbead253ddc Christoph Hellwig 2022-02-16 431 }
241f6885965683 Alistair Popple 2022-09-28 432
241f6885965683 Alistair Popple 2022-09-28 433 return unmapped;
241f6885965683 Alistair Popple 2022-09-28 434 }
241f6885965683 Alistair Popple 2022-09-28 435
On 2024/3/28 20:22, kernel test robot wrote:
> Hi Kefeng,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on linus/master]
> [also build test ERROR on v6.9-rc1]
> [cannot apply to akpm-mm/mm-everything next-20240328]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-migrate-add-isolate_movable_folio/20240327-221513
> base:   linus/master
> patch link:    https://lore.kernel.org/r/20240327141034.3712697-4-wangkefeng.wang%40huawei.com
> patch subject: [PATCH 3/6] mm: remove isolate_lru_page()
> config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20240328/202403282057.pIA3kJoz-lkp@intel.com/config)
> compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403282057.pIA3kJoz-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202403282057.pIA3kJoz-lkp@intel.com/
>
> All errors (new ones prefixed by >>):

This was changed locally, but the change was missed when rebasing onto the new branch; will fix.

>
>>> mm/migrate_device.c:388:9: error: call to undeclared function 'isolate_lru_page'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
>      388 |                         if (!isolate_lru_page(page)) {
>          |                             ^
>    mm/migrate_device.c:388:9: note: did you mean '__isolate_free_page'?
>    mm/internal.h:487:12: note: '__isolate_free_page' declared here
>      487 | extern int __isolate_free_page(struct page *page, unsigned int order);
>          |            ^
>    1 error generated.
>
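For reference, a minimal sketch of the kind of fixup this implies for the remaining call site in migrate_device_unmap(), assuming the straightforward page_folio() conversion rather than a full folio conversion of the function (illustrative only, not necessarily the actual respin):

		/* ZONE_DEVICE pages are not on LRU */
		if (!is_zone_device_page(page)) {
			if (!PageLRU(page) && allow_drain) {
				/* Drain CPU's lru cache */
				lru_add_drain_all();
				allow_drain = false;
			}

			/* was: if (!isolate_lru_page(page)) */
			if (!folio_isolate_lru(page_folio(page))) {
				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
				restore++;
				continue;
			}

			/* Drop the reference we took in collect */
			put_page(page);
		}

folio_isolate_lru() keeps the same boolean convention as the removed wrapper (true on success), so the surrounding error handling stays unchanged.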
Hi Kefeng,
kernel test robot noticed the following build errors:
[auto build test ERROR on linus/master]
[also build test ERROR on v6.9-rc1]
[cannot apply to akpm-mm/mm-everything next-20240328]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-migrate-add-isolate_movable_folio/20240327-221513
base: linus/master
patch link: https://lore.kernel.org/r/20240327141034.3712697-4-wangkefeng.wang%40huawei.com
patch subject: [PATCH 3/6] mm: remove isolate_lru_page()
config: x86_64-randconfig-013-20240328 (https://download.01.org/0day-ci/archive/20240328/202403282357.bFSsmYuH-lkp@intel.com/config)
compiler: gcc-10 (Ubuntu 10.5.0-1ubuntu1) 10.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403282357.bFSsmYuH-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403282357.bFSsmYuH-lkp@intel.com/
All errors (new ones prefixed by >>):
mm/migrate_device.c: In function 'migrate_device_unmap':
>> mm/migrate_device.c:388:9: error: implicit declaration of function 'isolate_lru_page' [-Werror=implicit-function-declaration]
388 | if (!isolate_lru_page(page)) {
| ^~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors
vim +/isolate_lru_page +388 mm/migrate_device.c
76cbbead253ddc Christoph Hellwig 2022-02-16 355
76cbbead253ddc Christoph Hellwig 2022-02-16 356 /*
44af0b45d58d7b Alistair Popple 2022-11-11 357 * Unmaps pages for migration. Returns number of source pfns marked as
44af0b45d58d7b Alistair Popple 2022-11-11 358 * migrating.
76cbbead253ddc Christoph Hellwig 2022-02-16 359 */
241f6885965683 Alistair Popple 2022-09-28 360 static unsigned long migrate_device_unmap(unsigned long *src_pfns,
241f6885965683 Alistair Popple 2022-09-28 361 unsigned long npages,
241f6885965683 Alistair Popple 2022-09-28 362 struct page *fault_page)
76cbbead253ddc Christoph Hellwig 2022-02-16 363 {
76cbbead253ddc Christoph Hellwig 2022-02-16 364 unsigned long i, restore = 0;
76cbbead253ddc Christoph Hellwig 2022-02-16 365 bool allow_drain = true;
241f6885965683 Alistair Popple 2022-09-28 366 unsigned long unmapped = 0;
76cbbead253ddc Christoph Hellwig 2022-02-16 367
76cbbead253ddc Christoph Hellwig 2022-02-16 368 lru_add_drain();
76cbbead253ddc Christoph Hellwig 2022-02-16 369
76cbbead253ddc Christoph Hellwig 2022-02-16 370 for (i = 0; i < npages; i++) {
241f6885965683 Alistair Popple 2022-09-28 371 struct page *page = migrate_pfn_to_page(src_pfns[i]);
4b8554c527f3cf Matthew Wilcox (Oracle 2022-01-28 372) struct folio *folio;
76cbbead253ddc Christoph Hellwig 2022-02-16 373
44af0b45d58d7b Alistair Popple 2022-11-11 374 if (!page) {
44af0b45d58d7b Alistair Popple 2022-11-11 375 if (src_pfns[i] & MIGRATE_PFN_MIGRATE)
44af0b45d58d7b Alistair Popple 2022-11-11 376 unmapped++;
76cbbead253ddc Christoph Hellwig 2022-02-16 377 continue;
44af0b45d58d7b Alistair Popple 2022-11-11 378 }
76cbbead253ddc Christoph Hellwig 2022-02-16 379
76cbbead253ddc Christoph Hellwig 2022-02-16 380 /* ZONE_DEVICE pages are not on LRU */
76cbbead253ddc Christoph Hellwig 2022-02-16 381 if (!is_zone_device_page(page)) {
76cbbead253ddc Christoph Hellwig 2022-02-16 382 if (!PageLRU(page) && allow_drain) {
1fec6890bf2247 Matthew Wilcox (Oracle 2023-06-21 383) /* Drain CPU's lru cache */
76cbbead253ddc Christoph Hellwig 2022-02-16 384 lru_add_drain_all();
76cbbead253ddc Christoph Hellwig 2022-02-16 385 allow_drain = false;
76cbbead253ddc Christoph Hellwig 2022-02-16 386 }
76cbbead253ddc Christoph Hellwig 2022-02-16 387
f7f9c00dfafffd Baolin Wang 2023-02-15 @388 if (!isolate_lru_page(page)) {
241f6885965683 Alistair Popple 2022-09-28 389 src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
76cbbead253ddc Christoph Hellwig 2022-02-16 390 restore++;
76cbbead253ddc Christoph Hellwig 2022-02-16 391 continue;
76cbbead253ddc Christoph Hellwig 2022-02-16 392 }
76cbbead253ddc Christoph Hellwig 2022-02-16 393
76cbbead253ddc Christoph Hellwig 2022-02-16 394 /* Drop the reference we took in collect */
76cbbead253ddc Christoph Hellwig 2022-02-16 395 put_page(page);
76cbbead253ddc Christoph Hellwig 2022-02-16 396 }
76cbbead253ddc Christoph Hellwig 2022-02-16 397
4b8554c527f3cf Matthew Wilcox (Oracle 2022-01-28 398) folio = page_folio(page);
4b8554c527f3cf Matthew Wilcox (Oracle 2022-01-28 399) if (folio_mapped(folio))
4b8554c527f3cf Matthew Wilcox (Oracle 2022-01-28 400) try_to_migrate(folio, 0);
76cbbead253ddc Christoph Hellwig 2022-02-16 401
16ce101db85db6 Alistair Popple 2022-09-28 402 if (page_mapped(page) ||
241f6885965683 Alistair Popple 2022-09-28 403 !migrate_vma_check_page(page, fault_page)) {
76cbbead253ddc Christoph Hellwig 2022-02-16 404 if (!is_zone_device_page(page)) {
76cbbead253ddc Christoph Hellwig 2022-02-16 405 get_page(page);
76cbbead253ddc Christoph Hellwig 2022-02-16 406 putback_lru_page(page);
76cbbead253ddc Christoph Hellwig 2022-02-16 407 }
76cbbead253ddc Christoph Hellwig 2022-02-16 408
241f6885965683 Alistair Popple 2022-09-28 409 src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
76cbbead253ddc Christoph Hellwig 2022-02-16 410 restore++;
76cbbead253ddc Christoph Hellwig 2022-02-16 411 continue;
76cbbead253ddc Christoph Hellwig 2022-02-16 412 }
241f6885965683 Alistair Popple 2022-09-28 413
241f6885965683 Alistair Popple 2022-09-28 414 unmapped++;
76cbbead253ddc Christoph Hellwig 2022-02-16 415 }
76cbbead253ddc Christoph Hellwig 2022-02-16 416
76cbbead253ddc Christoph Hellwig 2022-02-16 417 for (i = 0; i < npages && restore; i++) {
241f6885965683 Alistair Popple 2022-09-28 418 struct page *page = migrate_pfn_to_page(src_pfns[i]);
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 419) struct folio *folio;
76cbbead253ddc Christoph Hellwig 2022-02-16 420
241f6885965683 Alistair Popple 2022-09-28 421 if (!page || (src_pfns[i] & MIGRATE_PFN_MIGRATE))
76cbbead253ddc Christoph Hellwig 2022-02-16 422 continue;
76cbbead253ddc Christoph Hellwig 2022-02-16 423
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 424) folio = page_folio(page);
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 425) remove_migration_ptes(folio, folio, false);
76cbbead253ddc Christoph Hellwig 2022-02-16 426
241f6885965683 Alistair Popple 2022-09-28 427 src_pfns[i] = 0;
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 428) folio_unlock(folio);
4eecb8b9163df8 Matthew Wilcox (Oracle 2022-01-28 429) folio_put(folio);
76cbbead253ddc Christoph Hellwig 2022-02-16 430 restore--;
76cbbead253ddc Christoph Hellwig 2022-02-16 431 }
241f6885965683 Alistair Popple 2022-09-28 432
241f6885965683 Alistair Popple 2022-09-28 433 return unmapped;
241f6885965683 Alistair Popple 2022-09-28 434 }
241f6885965683 Alistair Popple 2022-09-28 435
There are no more callers of isolate_lru_page(), remove it.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 Documentation/mm/page_migration.rst                    | 6 +++---
 Documentation/translations/zh_CN/mm/page_migration.rst | 6 +++---
 mm/filemap.c                                           | 2 +-
 mm/folio-compat.c                                      | 7 -------
 mm/internal.h                                          | 1 -
 mm/khugepaged.c                                        | 8 ++++----
 mm/migrate_device.c                                    | 2 +-
 mm/swap.c                                              | 2 +-
 8 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index f1ce67a26615..0046bbbdc65d 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -67,8 +67,8 @@ In kernel use of migrate_pages()
 
    Lists of pages to be migrated are generated by scanning over
    pages and moving them into lists. This is done by
-   calling isolate_lru_page().
-   Calling isolate_lru_page() increases the references to the page
+   calling folio_isolate_lru().
+   Calling folio_isolate_lru() increases the references to the page
    so that it cannot vanish while the page migration occurs.
    It also prevents the swapper or other scans from encountering
    the page.
@@ -86,7 +86,7 @@ How migrate_pages() works
 
 migrate_pages() does several passes over its list of pages. A page is moved
 if all references to a page are removable at the time. The page has
-already been removed from the LRU via isolate_lru_page() and the refcount
+already been removed from the LRU via folio_isolate_lru() and the refcount
 is increased so that the page cannot be freed while page migration occurs.
 
 Steps:
diff --git a/Documentation/translations/zh_CN/mm/page_migration.rst b/Documentation/translations/zh_CN/mm/page_migration.rst
index f95063826a15..8c8461c6cb9f 100644
--- a/Documentation/translations/zh_CN/mm/page_migration.rst
+++ b/Documentation/translations/zh_CN/mm/page_migration.rst
@@ -50,8 +50,8 @@ mbind()设置一个新的内存策略。一个进程的页面也可以通过sys_
 
 1. 从LRU中移除页面。
 
-   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 isolate_lru_page()
-   来完成的。调用isolate_lru_page()增加了对该页的引用,这样在页面迁移发生时它就不会
+   要迁移的页面列表是通过扫描页面并把它们移到列表中来生成的。这是通过调用 folio_isolate_lru()
+   来完成的。调用folio_isolate_lru()增加了对该页的引用,这样在页面迁移发生时它就不会
    消失。它还可以防止交换器或其他扫描器遇到该页。
 
@@ -65,7 +65,7 @@ migrate_pages()如何工作
 =======================
 
 migrate_pages()对它的页面列表进行了多次处理。如果当时对一个页面的所有引用都可以被移除,
-那么这个页面就会被移动。该页已经通过isolate_lru_page()从LRU中移除,并且refcount被
+那么这个页面就会被移动。该页已经通过folio_isolate_lru()从LRU中移除,并且refcount被
 增加,以便在页面迁移发生时不释放该页。
 
 步骤:
diff --git a/mm/filemap.c b/mm/filemap.c
index 7437b2bd75c1..2a03fbbf413a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -113,7 +113,7 @@
  *    ->private_lock		(try_to_unmap_one)
  *    ->i_pages lock		(try_to_unmap_one)
  *    ->lruvec->lru_lock	(follow_page->mark_page_accessed)
- *    ->lruvec->lru_lock	(check_pte_range->isolate_lru_page)
+ *    ->lruvec->lru_lock	(check_pte_range->folio_isolate_lru)
  *    ->private_lock		(folio_remove_rmap_pte->set_page_dirty)
  *    ->i_pages lock		(folio_remove_rmap_pte->set_page_dirty)
  *    bdi.wb->list_lock		(folio_remove_rmap_pte->set_page_dirty)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 50412014f16f..95ad426b296a 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -105,13 +105,6 @@ struct page *grab_cache_page_write_begin(struct address_space *mapping,
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
 
-bool isolate_lru_page(struct page *page)
-{
-	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
-		return false;
-	return folio_isolate_lru((struct folio *)page);
-}
-
 void putback_lru_page(struct page *page)
 {
 	folio_putback_lru(page_folio(page));
diff --git a/mm/internal.h b/mm/internal.h
index 7e486f2c502c..7cdf7d3d83ea 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -292,7 +292,6 @@ extern unsigned long highest_memmap_pfn;
 /*
  * in mm/vmscan.c:
  */
-bool isolate_lru_page(struct page *page);
 bool folio_isolate_lru(struct folio *folio);
 void putback_lru_page(struct page *page);
 void folio_putback_lru(struct folio *folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 38830174608f..e9b8b368f655 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -607,7 +607,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		}
 
 		/*
-		 * We can do it before isolate_lru_page because the
+		 * We can do it before folio_isolate_lru because the
 		 * page can't be freed from under us. NOTE: PG_lock
 		 * is needed to serialize against split_huge_page
 		 * when invoked from the VM.
@@ -1867,7 +1867,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				result = SCAN_FAIL;
 				goto xa_unlocked;
 			}
-			/* drain lru cache to help isolate_lru_page() */
+			/* drain lru cache to help folio_isolate_lru() */
 			lru_add_drain();
 			page = folio_file_page(folio, index);
 		} else if (trylock_page(page)) {
@@ -1883,7 +1883,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			page_cache_sync_readahead(mapping, &file->f_ra,
 						  file, index,
 						  end - index);
-			/* drain lru cache to help isolate_lru_page() */
+			/* drain lru cache to help folio_isolate_lru() */
 			lru_add_drain();
 			page = find_lock_page(mapping, index);
 			if (unlikely(page == NULL)) {
@@ -1990,7 +1990,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		 * We control three references to the page:
 		 *  - we hold a pin on it;
 		 *  - one reference from page cache;
-		 *  - one from isolate_lru_page;
+		 *  - one from folio_isolate_lru;
 		 * If those are the only references, then any new usage of the
 		 * page will have to fetch it from the page cache. That requires
 		 * locking the page to handle truncate, so any new usage will be
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index c0547271eaaa..3a42624bb590 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -326,7 +326,7 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 {
 	/*
 	 * One extra ref because caller holds an extra reference, either from
-	 * isolate_lru_page() for a regular page, or migrate_vma_collect() for
+	 * folio_isolate_lru() for a regular page, or migrate_vma_collect() for
 	 * a device page.
 	 */
 	int extra = 1 + (page == fault_page);
diff --git a/mm/swap.c b/mm/swap.c
index 500a09a48dfd..decd6d44b7ac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -930,7 +930,7 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
 
 /*
  * lru_cache_disable() needs to be called before we start compiling
- * a list of pages to be migrated using isolate_lru_page().
+ * a list of pages to be migrated using folio_isolate_lru().
  * It drains pages on LRU cache and then disable on all cpus until
  * lru_cache_enable is called.
  *
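For any leftover or out-of-tree caller, the conversion pattern is the one this diff applies throughout: look up the folio with page_folio() and call folio_isolate_lru(), which (as the documentation hunks above describe) takes the folio off the LRU and elevates its refcount so it cannot vanish while it sits on a private migration list. A hedged sketch of that pattern, with illustrative names (page and migrate_list are placeholders, and folio_isolate_lru() is declared in mm/internal.h, so this only applies inside mm/):

	LIST_HEAD(migrate_list);		/* illustrative private list */
	struct folio *folio = page_folio(page);

	if (folio_isolate_lru(folio)) {
		/*
		 * The folio is now off the LRU and we hold an extra
		 * reference, so it cannot be freed or rescanned while
		 * it waits on our private list for migration.
		 */
		list_add_tail(&folio->lru, &migrate_list);
	}

On failure or after migration, entries would be returned with folio_putback_lru(), which drops the extra reference taken here.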