
[v2,1/3] mm: revert handling Non-LRU pages returned by follow_page

Message ID 20220814140534.363348-2-haiyue.wang@intel.com (mailing list archive)
State Superseded
Series: fix follow_page related issues

Commit Message

Wang, Haiyue Aug. 14, 2022, 2:05 p.m. UTC
Commit 3218f8712d6b ("mm: handling Non-LRU pages returned by vm_normal_pages")
does not handle follow_page() with the FOLL_GET flag correctly: FOLL_GET takes
a reference on the returned page via get_page(), so callers must not skip the
page and return without calling put_page(), or the reference is leaked.

So revert the related fix to prepare for a clean patch that handles Non-LRU
pages returned by follow_page().

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
---
 mm/huge_memory.c | 2 +-
 mm/ksm.c         | 6 +++---
 mm/migrate.c     | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

Comments

David Hildenbrand Aug. 14, 2022, 4:30 p.m. UTC | #1
On 14.08.22 16:05, Haiyue Wang wrote:
> Commit 3218f8712d6b ("mm: handling Non-LRU pages returned by vm_normal_pages")
> does not handle follow_page() with the FOLL_GET flag correctly: FOLL_GET takes
> a reference on the returned page via get_page(), so callers must not skip the
> page and return without calling put_page(), or the reference is leaked.
> 
> So revert the related fix to prepare for a clean patch that handles Non-LRU
> pages returned by follow_page().

What? Why?

Just fix it.
Wang, Haiyue Aug. 15, 2022, 1:02 a.m. UTC | #2
> -----Original Message-----
> From: David Hildenbrand <david@redhat.com>
> Sent: Monday, August 15, 2022 00:31
> To: Wang, Haiyue <haiyue.wang@intel.com>; linux-mm@kvack.org; linux-kernel@vger.kernel.org
> Cc: akpm@linux-foundation.org; linmiaohe@huawei.com; Huang, Ying <ying.huang@intel.com>;
> songmuchun@bytedance.com; naoya.horiguchi@linux.dev; alex.sierra@amd.com
> Subject: Re: [PATCH v2 1/3] mm: revert handling Non-LRU pages returned by follow_page
> 
> On 14.08.22 16:05, Haiyue Wang wrote:
> > Commit 3218f8712d6b ("mm: handling Non-LRU pages returned by vm_normal_pages")
> > does not handle follow_page() with the FOLL_GET flag correctly: FOLL_GET takes
> > a reference on the returned page via get_page(), so callers must not skip the
> > page and return without calling put_page(), or the reference is leaked.
> >
> > So revert the related fix to prepare for a clean patch that handles Non-LRU
> > pages returned by follow_page().
> 
> What? Why?
> 

As the cover letter said, this revert is so that PATCH 2/3 can be applied
directly on the Linux 5.19 branch. I will drop this kind of fix and fix the
issue directly in v3.

> Just fix it.
> 
> --
> Thanks,
> 
> David / dhildenb

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8a7c1b344abe..2ee6d38a1426 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2963,7 +2963,7 @@  static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		/* FOLL_DUMP to ignore special (like zero) pages */
 		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
 
-		if (IS_ERR_OR_NULL(page) || is_zone_device_page(page))
+		if (IS_ERR_OR_NULL(page))
 			continue;
 
 		if (!is_transparent_hugepage(page))
diff --git a/mm/ksm.c b/mm/ksm.c
index 42ab153335a2..fe3e0a39f73a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -475,7 +475,7 @@  static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 		cond_resched();
 		page = follow_page(vma, addr,
 				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
-		if (IS_ERR_OR_NULL(page) || is_zone_device_page(page))
+		if (IS_ERR_OR_NULL(page))
 			break;
 		if (PageKsm(page))
 			ret = handle_mm_fault(vma, addr,
@@ -560,7 +560,7 @@  static struct page *get_mergeable_page(struct rmap_item *rmap_item)
 		goto out;
 
 	page = follow_page(vma, addr, FOLL_GET);
-	if (IS_ERR_OR_NULL(page) || is_zone_device_page(page))
+	if (IS_ERR_OR_NULL(page))
 		goto out;
 	if (PageAnon(page)) {
 		flush_anon_page(vma, page, addr);
@@ -2308,7 +2308,7 @@  static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 			if (ksm_test_exit(mm))
 				break;
 			*page = follow_page(vma, ksm_scan.address, FOLL_GET);
-			if (IS_ERR_OR_NULL(*page) || is_zone_device_page(*page)) {
+			if (IS_ERR_OR_NULL(*page)) {
 				ksm_scan.address += PAGE_SIZE;
 				cond_resched();
 				continue;
diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..3d5f0262ab60 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1672,7 +1672,7 @@  static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 		goto out;
 
 	err = -ENOENT;
-	if (!page || is_zone_device_page(page))
+	if (!page)
 		goto out;
 
 	err = 0;
@@ -1863,7 +1863,7 @@  static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
 		if (IS_ERR(page))
 			goto set_status;
 
-		if (page && !is_zone_device_page(page)) {
+		if (page) {
 			err = page_to_nid(page);
 			put_page(page);
 		} else {