| Message ID | 20190717001446.12351-4-rcampbell@nvidia.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | mm/hmm: fixes for device private page migration |
On 7/16/19 5:14 PM, Ralph Campbell wrote:
> When migrating an anonymous private page to a ZONE_DEVICE private page,
> the source page->mapping and page->index fields are copied to the
> destination ZONE_DEVICE struct page and the page_mapcount() is increased.
> This is so rmap_walk() can be used to unmap and migrate the page back to
> system memory. However, try_to_unmap_one() computes the subpage pointer
> from a swap pte which computes an invalid page pointer and a kernel panic
> results such as:
>
> BUG: unable to handle page fault for address: ffffea1fffffffc8
>
> Currently, only single pages can be migrated to device private memory so
> no subpage computation is needed and it can be set to "page".
>
> Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Jason Gunthorpe <jgg@mellanox.com>
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>  mm/rmap.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index e5dfe2ae6b0d..ec1af8b60423 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1476,6 +1476,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  		 * No need to invalidate here it will synchronize on
>  		 * against the special swap migration pte.
>  		 */
> +		subpage = page;
>  		goto discard;
>  	}

The problem is clear, but the solution still leaves the code ever so slightly
more confusing, and it was already pretty difficult to begin with. I still hold
out hope for some comment documentation at least, and maybe even just removing
the subpage variable (as Jerome mentioned, offline) as well. Jerome?

thanks,
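For readers less familiar with this code path, here is a minimal, heavily abbreviated sketch of why the stale subpage pointer crashes, assuming the v5.2-era layout of try_to_unmap_one(). The helpers named (page_vma_mapped_walk(), pte_pfn(), page_remove_rmap(), put_page()) are the real kernel functions; everything else is condensed for illustration and is not a verbatim excerpt of mm/rmap.c:

```c
/*
 * Abbreviated sketch of the try_to_unmap_one() loop body (v5.2 era),
 * condensed for illustration -- not a verbatim excerpt of mm/rmap.c.
 */
while (page_vma_mapped_walk(&pvmw)) {
	/*
	 * For an ordinarily mapped page this selects the right subpage of
	 * a compound page from the pte's pfn.  A ZONE_DEVICE private page,
	 * however, is mapped by a swap-style pte, so pte_pfn() yields a
	 * meaningless value and subpage ends up pointing at garbage.
	 */
	subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);

	if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
	    is_zone_device_page(page)) {
		/* ... install the special swap migration pte ... */
		subpage = page;		/* the fix: only single pages reach here */
		goto discard;
	}

	/* ... other unmap/migration paths ... */

discard:
	/* Before the fix, this dereferenced the bogus subpage pointer. */
	page_remove_rmap(subpage, PageHuge(page));
	put_page(page);
}
```

Since only single (non-compound) pages can currently be migrated to device private memory, setting subpage = page in that branch is sufficient, which is exactly what the one-line hunk does.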
Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag, fixing
commit: a5430dda8a3a mm/migrate: support un-addressable ZONE_DEVICE page in migration.

The bot has tested the following trees: v5.2.1, v5.1.18, v4.19.59, v4.14.133.

v5.2.1: Build OK!
v5.1.18: Build OK!
v4.19.59: Build OK!
v4.14.133: Failed to apply! Possible dependencies:
    0f10851ea475 ("mm/mmu_notifier: avoid double notification when it is useless")

NOTE: The patch will not be queued to stable trees until it is upstream.

How should we proceed with this patch?

--
Thanks,
Sasha
diff --git a/mm/rmap.c b/mm/rmap.c
index e5dfe2ae6b0d..ec1af8b60423 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1476,6 +1476,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		 * No need to invalidate here it will synchronize on
 		 * against the special swap migration pte.
 		 */
+		subpage = page;
 		goto discard;
 	}