Message ID | 20190724232700.23327-4-rcampbell@nvidia.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | mm/hmm: fixes for device private page migration | expand |
Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: a5430dda8a3a ("mm/migrate: support un-addressable
ZONE_DEVICE page in migration").

The bot has tested the following trees: v5.2.2, v5.1.19, v4.19.60, v4.14.134.

v5.2.2: Build OK!
v5.1.19: Build OK!
v4.19.60: Build OK!
v4.14.134: Failed to apply! Possible dependencies:
    0f10851ea475 ("mm/mmu_notifier: avoid double notification when it is useless")

NOTE: The patch will not be queued to stable trees until it is upstream.

How should we proceed with this patch?

--
Thanks,
Sasha
```diff
diff --git a/mm/rmap.c b/mm/rmap.c
index e5dfe2ae6b0d..003377e24232 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1475,7 +1475,15 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			/*
 			 * No need to invalidate here it will synchronize on
 			 * against the special swap migration pte.
+			 *
+			 * The assignment to subpage above was computed from a
+			 * swap PTE which results in an invalid pointer.
+			 * Since only PAGE_SIZE pages can currently be
+			 * migrated, just set it to page. This will need to be
+			 * changed when hugepage migrations to device private
+			 * memory are supported.
 			 */
+			subpage = page;
 			goto discard;
 		}
```
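For readers without the surrounding kernel context: earlier in try_to_unmap_one() the subpage pointer is derived roughly as `subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte)`, which is only meaningful when the PTE holds a real pfn. For a device private page the PTE is a swap entry, so the extracted "pfn" bits are swap metadata and the resulting pointer is garbage by the time the discard path uses it. The following is a minimal userspace sketch of that arithmetic, not the kernel code itself; all names and values here (toy_memmap, swap_pte_bits, the pfn numbers) are invented for illustration:

```c
/*
 * Toy userspace model of the bug fixed above; NOT the kernel code.
 * It mimics the subpage computation on a small flat "memmap" where
 * toy_memmap[i] describes pfn i.
 */
#include <stdio.h>
#include <stdint.h>

struct page { uint64_t pfn; };

static struct page toy_memmap[16];

int main(void)
{
	for (uint64_t i = 0; i < 16; i++)
		toy_memmap[i].pfn = i;

	/* The page being unmapped: head page at pfn 3. */
	struct page *page = &toy_memmap[3];

	/*
	 * A device private PTE holds a swap entry, not a pfn, so the
	 * "pfn" field decoded from it is really swap metadata.
	 */
	uint64_t swap_pte_bits = 0x7ffc0de;

	/*
	 * Buggy analogue of subpage = page - page_to_pfn(page) + pte_pfn(pte),
	 * computed as an array index so the example stays well-defined C.
	 * The result lands far outside the 16-entry memmap.
	 */
	long bad_index = (long)(page - toy_memmap)
			 - (long)page->pfn + (long)swap_pte_bits;
	printf("buggy subpage index: %ld (memmap has 16 entries)\n", bad_index);

	/*
	 * The patch's fix: only PAGE_SIZE pages are currently migrated
	 * to device private memory, so the subpage is the page itself.
	 */
	struct page *subpage = page;
	printf("fixed subpage index: %ld\n", (long)(subpage - toy_memmap));
	return 0;
}
```

Compiled with a plain cc, the first printf reports an index far outside the toy memmap, which is the analogue of the invalid struct page pointer the `subpage = page;` assignment guards against.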