Message ID | 20220707190349.9778-7-alex.sierra@amd.com (mailing list archive)
State      | Deferred, archived
Series     | Add MEMORY_DEVICE_COHERENT for coherent device memory mapping
On 07.07.22 21:03, Alex Sierra wrote:
> From: Alistair Popple <apopple@nvidia.com>
>
> migrate_vma_setup() checks that a valid vma is passed so that the page
> tables can be walked to find the pfns associated with a given address
> range. However in some cases the pfns are already known, such as when
> migrating device coherent pages during pin_user_pages() meaning a valid
> vma isn't required.

As raised in my other reply, without a VMA ... it feels odd to use a
"migrate_vma" API. For an internal (mm/migrate_device.c) use case it is
ok I guess, but it certainly adds a bit of confusion. For example,
because migrate_vma_setup() will undo ref+lock not obtained by it.

I guess the interesting points are that

a) Besides migrate_vma_pages() and migrate_vma_setup(), the ->vma is
   unused.

b) migrate_vma_setup() does collect+unmap+cleanup if unmap failed.

c) With our source page in our hands, we cannot be processing a hole in
   a VMA.

Not sure if it's better, but I would

a) Enforce in migrate_vma_setup() that there is a VMA. Code outside of
   mm/migrate_device.c shouldn't be doing hacks like this.

b) Not call migrate_vma_setup() from migrate_device_page(), but call
   migrate_vma_unmap() directly, and add a comment.

That would leave a single change in this patch (migrate_vma_pages()).
But is that even required? Because ...

> @@ -685,7 +685,7 @@ void migrate_vma_pages(struct migrate_vma *migrate)
>  			continue;
>  		}
>
> -		if (!page) {
> +		if (!page && migrate->vma) {

How could we ever have !page in the case of migrate_device_page()?

Instead, I think a VM_BUG_ON(!migrate->vma) should hold and you can
just simplify.

> 		if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE))
> 			continue;
> 		if (!notified) {
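[Editor's sketch] Suggestion (b) above amounts to skipping the collect step entirely when the source page is already in hand, and going straight to the unmap step. A minimal userspace sketch of that control flow follows; the struct and function names mirror the kernel's, but the types and bodies here are invented stand-ins, not the real kernel code:

```c
#include <assert.h>
#include <stdio.h>

/* Toy stand-in for the kernel's struct migrate_vma; only the fields
 * needed to illustrate the flow are modeled. */
struct migrate_vma {
	unsigned long npages;	/* pages in the src/dst arrays */
	unsigned long cpages;	/* pages collected for migration */
};

/* Stand-in for the kernel's migrate_vma_unmap(): in the kernel this
 * unmaps and isolates the collected pages; here we just log. */
static void migrate_vma_unmap(struct migrate_vma *args)
{
	printf("unmapping %lu collected page(s)\n", args->cpages);
}

/* Suggestion (b): the caller of migrate_device_page() already filled
 * in the single source pfn, so there is no page-table walk to do.
 * Rather than routing through migrate_vma_setup() (whose VMA checks
 * and collect step are dead weight here), record the one known page
 * and call migrate_vma_unmap() directly, with a comment explaining why.
 */
static void migrate_device_page_sketch(struct migrate_vma *args)
{
	/* One source page, supplied by the caller - nothing to collect. */
	args->npages = 1;
	args->cpages = 1;
	migrate_vma_unmap(args);
}
```

The point of the sketch is only the call graph: the VMA-validation and collect logic stays private to the VMA-walking path, and the pfn-in-hand path never touches it.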
David Hildenbrand <david@redhat.com> writes:

> On 07.07.22 21:03, Alex Sierra wrote:
>> From: Alistair Popple <apopple@nvidia.com>
>>
>> migrate_vma_setup() checks that a valid vma is passed so that the page
>> tables can be walked to find the pfns associated with a given address
>> range. However in some cases the pfns are already known, such as when
>> migrating device coherent pages during pin_user_pages() meaning a valid
>> vma isn't required.
>
> As raised in my other reply, without a VMA ... it feels odd to use a
> "migrate_vma" API. For an internal (mm/migrate_device.c) use case it is
> ok I guess, but it certainly adds a bit of confusion. For example,
> because migrate_vma_setup() will undo ref+lock not obtained by it.
>
> I guess the interesting points are that
>
> a) Besides migrate_vma_pages() and migrate_vma_setup(), the ->vma is
>    unused.
>
> b) migrate_vma_setup() does collect+unmap+cleanup if unmap failed.
>
> c) With our source page in our hands, we cannot be processing a hole in
>    a VMA.
>
> Not sure if it's better, but I would
>
> a) Enforce in migrate_vma_setup() that there is a VMA. Code outside of
>    mm/migrate_device.c shouldn't be doing hacks like this.
>
> b) Not call migrate_vma_setup() from migrate_device_page(), but call
>    migrate_vma_unmap() directly, and add a comment.
>
> That would leave a single change in this patch (migrate_vma_pages()).
> But is that even required? Because ...
>
>> @@ -685,7 +685,7 @@ void migrate_vma_pages(struct migrate_vma *migrate)
>>  			continue;
>>  		}
>>
>> -		if (!page) {
>> +		if (!page && migrate->vma) {
>
> How could we ever have !page in the case of migrate_device_page()?

Oh good point. This patch was originally part of a larger series I was
working on at the time but you're right - for migrate_device_page() we
should never hit this case. I will respin the next patch (number 7 in
this series) to include this.

> Instead, I think a VM_BUG_ON(!migrate->vma) should hold and you can
> just simplify.
>
>> 		if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE))
>> 			continue;
>> 		if (!notified) {
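[Editor's sketch] The VM_BUG_ON simplification agreed above rests on an invariant: a hole (!page) in the src array can only be produced by the VMA page-table walk, never by the pfn-in-hand device path. A toy userspace sketch of that invariant follows; the type names echo the kernel's but the definitions are invented stand-ins, and a plain assert() stands in for VM_BUG_ON():

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Invented stand-ins, modeling just enough to show the invariant. */
struct vm_area_struct { int dummy; };

struct migrate_vma {
	struct vm_area_struct *vma;	/* NULL on the device-page path */
	void **src_pages;		/* looked-up source pages (NULL = hole) */
	unsigned long npages;
};

static void migrate_pages_sketch(struct migrate_vma *migrate)
{
	unsigned long i;

	for (i = 0; i < migrate->npages; i++) {
		void *page = migrate->src_pages[i];

		if (!page) {
			/* Holes can only come from walking a VMA's page
			 * tables; the device path hands us its page up
			 * front, so it never reaches this branch. */
			assert(migrate->vma != NULL);
			printf("hole at %lu: handled via vma\n", i);
			continue;
		}
		printf("migrating page %lu\n", i);
	}
}
```

With the invariant asserted once, the body of the !page branch does not need the extra `migrate->vma` test that the posted patch added, which is exactly the simplification being discussed.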
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 18bc6483f63a..cf9668376c5a 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -486,24 +486,24 @@ int migrate_vma_setup(struct migrate_vma *args)
 	args->start &= PAGE_MASK;
 	args->end &= PAGE_MASK;
-	if (!args->vma || is_vm_hugetlb_page(args->vma) ||
-	    (args->vma->vm_flags & VM_SPECIAL) || vma_is_dax(args->vma))
-		return -EINVAL;
-	if (nr_pages <= 0)
-		return -EINVAL;
-	if (args->start < args->vma->vm_start ||
-	    args->start >= args->vma->vm_end)
-		return -EINVAL;
-	if (args->end <= args->vma->vm_start || args->end > args->vma->vm_end)
-		return -EINVAL;
 	if (!args->src || !args->dst)
 		return -EINVAL;
-
-	memset(args->src, 0, sizeof(*args->src) * nr_pages);
-	args->cpages = 0;
-	args->npages = 0;
-
-	migrate_vma_collect(args);
+	if (args->vma) {
+		if (is_vm_hugetlb_page(args->vma) ||
+		    (args->vma->vm_flags & VM_SPECIAL) || vma_is_dax(args->vma))
+			return -EINVAL;
+		if (args->start < args->vma->vm_start ||
+		    args->start >= args->vma->vm_end)
+			return -EINVAL;
+		if (args->end <= args->vma->vm_start ||
+		    args->end > args->vma->vm_end)
+			return -EINVAL;
+		memset(args->src, 0, sizeof(*args->src) * nr_pages);
+		args->cpages = 0;
+		args->npages = 0;
+
+		migrate_vma_collect(args);
+	}
 	if (args->cpages)
 		migrate_vma_unmap(args);
@@ -685,7 +685,7 @@ void migrate_vma_pages(struct migrate_vma *migrate)
 			continue;
 		}
-		if (!page) {
+		if (!page && migrate->vma) {
 			if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE))
 				continue;
 			if (!notified) {