Message ID | 20220814140534.363348-4-haiyue.wang@intel.com (mailing list archive) |
---|---|
State | Superseded |
Series | fix follow_page related issues |
On 14.08.22 16:05, Haiyue Wang wrote:
> Add the missed put_page handling for handling Non-LRU pages returned by
> follow_page with FOLL_GET flag set.
>
> This is the second patch for fixing the commit
> 3218f8712d6b ("mm: handling Non-LRU pages returned by vm_normal_pages")
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> ---
>  mm/huge_memory.c |  2 +-
>  mm/ksm.c         | 10 ++++++++++
>  mm/migrate.c     |  6 +++++-
>  3 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2ee6d38a1426..b2ba17c3dcd7 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2966,7 +2966,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  		if (IS_ERR_OR_NULL(page))
>  			continue;
>
> -		if (!is_transparent_hugepage(page))
> +		if (is_zone_device_page(page) || !is_transparent_hugepage(page))
>  			goto next;
>
>  		total++;
> diff --git a/mm/ksm.c b/mm/ksm.c
> index fe3e0a39f73a..1360bb52ada6 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -477,6 +477,10 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
>  				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
>  		if (IS_ERR_OR_NULL(page))
>  			break;
> +		if (is_zone_device_page(page)) {
> +			put_page(page);
> +			break;
> +		}

I think we can drop this check completely. While working on patches that
touch this code I realized that this check is completely useless: device
pages are never PageKsm pages, so there is no need to special-case them
here.

If a zone device page could be PageKsm, then we wouldn't handle it here
correctly and wouldn't break KSM.

So just drop it.
> -----Original Message-----
> From: David Hildenbrand <david@redhat.com>
> Sent: Monday, August 15, 2022 00:34
> To: Wang, Haiyue <haiyue.wang@intel.com>; linux-mm@kvack.org; linux-kernel@vger.kernel.org
> Cc: akpm@linux-foundation.org; linmiaohe@huawei.com; Huang, Ying <ying.huang@intel.com>;
> songmuchun@bytedance.com; naoya.horiguchi@linux.dev; alex.sierra@amd.com
> Subject: Re: [PATCH v2 3/3] mm: handling Non-LRU pages returned by follow_page
>
> On 14.08.22 16:05, Haiyue Wang wrote:
> > Add the missed put_page handling for handling Non-LRU pages returned by
> > follow_page with FOLL_GET flag set.
> >
> > This is the second patch for fixing the commit
> > 3218f8712d6b ("mm: handling Non-LRU pages returned by vm_normal_pages")
> >
> > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > ---
> >  mm/huge_memory.c |  2 +-
> >  mm/ksm.c         | 10 ++++++++++
> >  mm/migrate.c     |  6 +++++-
> >  3 files changed, 16 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/ksm.c b/mm/ksm.c
> > index fe3e0a39f73a..1360bb52ada6 100644
> > --- a/mm/ksm.c
> > +++ b/mm/ksm.c
> > @@ -477,6 +477,10 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
> >  				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
> >  		if (IS_ERR_OR_NULL(page))
> >  			break;
> > +		if (is_zone_device_page(page)) {
> > +			put_page(page);
> > +			break;
> > +		}
>
> I think we can drop this check completely. While working on patches that
> touch this code I realized that this check is completely useless: device
> pages are never PageKsm pages, so there is no need to special-case them
> here.
>
> If a zone device page could be PageKsm, then we wouldn't handle it here
> correctly and wouldn't break KSM.
>
> So just drop it.

Fixed in v3.

> --
> Thanks,
>
> David / dhildenb
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2ee6d38a1426..b2ba17c3dcd7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2966,7 +2966,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		if (IS_ERR_OR_NULL(page))
 			continue;
 
-		if (!is_transparent_hugepage(page))
+		if (is_zone_device_page(page) || !is_transparent_hugepage(page))
 			goto next;
 
 		total++;
diff --git a/mm/ksm.c b/mm/ksm.c
index fe3e0a39f73a..1360bb52ada6 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -477,6 +477,10 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
 		if (IS_ERR_OR_NULL(page))
 			break;
+		if (is_zone_device_page(page)) {
+			put_page(page);
+			break;
+		}
 		if (PageKsm(page))
 			ret = handle_mm_fault(vma, addr,
 					      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
@@ -562,10 +566,13 @@ static struct page *get_mergeable_page(struct rmap_item *rmap_item)
 	page = follow_page(vma, addr, FOLL_GET);
 	if (IS_ERR_OR_NULL(page))
 		goto out;
+	if (is_zone_device_page(page))
+		goto out_putpage;
 	if (PageAnon(page)) {
 		flush_anon_page(vma, page, addr);
 		flush_dcache_page(page);
 	} else {
+out_putpage:
 		put_page(page);
out:
 		page = NULL;
@@ -2313,6 +2320,8 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 			cond_resched();
 			continue;
 		}
+		if (is_zone_device_page(*page))
+			goto next_page;
 		if (PageAnon(*page)) {
 			flush_anon_page(vma, *page, ksm_scan.address);
 			flush_dcache_page(*page);
@@ -2327,6 +2336,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 			mmap_read_unlock(mm);
 			return rmap_item;
 		}
+next_page:
 		put_page(*page);
 		ksm_scan.address += PAGE_SIZE;
 		cond_resched();
diff --git a/mm/migrate.c b/mm/migrate.c
index 5d304de3950b..fee12cd2f294 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1675,6 +1675,9 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 	if (!page)
 		goto out;
 
+	if (is_zone_device_page(page))
+		goto out_putpage;
+
 	err = 0;
 	if (page_to_nid(page) == node)
 		goto out_putpage;
@@ -1869,7 +1872,8 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
 		goto set_status;
 
 	if (page) {
-		err = page_to_nid(page);
+		err = !is_zone_device_page(page) ? page_to_nid(page)
+						 : -ENOENT;
 		if (foll_flags & FOLL_GET)
 			put_page(page);
 	} else {
Add the missing put_page() handling for Non-LRU pages returned by
follow_page() with the FOLL_GET flag set.

This is the second patch fixing the commit
3218f8712d6b ("mm: handling Non-LRU pages returned by vm_normal_pages")

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
---
 mm/huge_memory.c |  2 +-
 mm/ksm.c         | 10 ++++++++++
 mm/migrate.c     |  6 +++++-
 3 files changed, 16 insertions(+), 2 deletions(-)