Message ID: 20200706222347.32290-1-rcampbell@nvidia.com
Series: mm/migrate: avoid device private invalidations
On Mon, Jul 06, 2020 at 03:23:42PM -0700, Ralph Campbell wrote:
> The goal for this series is to avoid device private memory TLB
> invalidations when migrating a range of addresses from system
> memory to device private memory and some of those pages have already
> been migrated. The approach taken is to introduce a new mmu notifier
> invalidation event type and use that in the device driver to skip
> invalidation callbacks from migrate_vma_setup(). The device driver is
> also then expected to handle device MMU invalidations as part of the
> migrate_vma_setup(), migrate_vma_pages(), migrate_vma_finalize() process.
> Note that this is opt-in. A device driver can simply invalidate its MMU
> in the mmu notifier callback and not handle MMU invalidations in the
> migration sequence.

In the kvmppc secure guest use case:

1. We ensure that we don't issue migrate_vma() calls for pages that have
   already been migrated to the device side (which is actually secure
   memory for us that is managed by Ultravisor firmware).

2. The page table mappings on the device side (secure memory) are managed
   transparently to the kernel by the Ultravisor firmware.

Hence I assume that no specific action would be required for the kvmppc
use case due to this patchset. In fact, we never registered for these
mmu notifier events.

Regards,
Bharata.
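
[For readers unfamiliar with the opt-in pattern the quoted cover letter describes, a rough sketch of what a participating driver's interval-notifier callback might look like follows. This is not compilable on its own (it needs kernel context), and `struct my_device`, `dev->pgmap_owner`, and `my_device_flush_tlb()` are illustrative names, not part of the series; the event/field names follow the MMU_NOTIFY_MIGRATE variant of this work.]

    /* Illustrative kernel sketch, not from the patch itself. */
    static bool my_dev_invalidate(struct mmu_interval_notifier *mni,
                                  const struct mmu_notifier_range *range,
                                  unsigned long cur_seq)
    {
            struct my_device *dev =
                    container_of(mni, struct my_device, notifier);

            /*
             * Opt-in: skip the notifier-driven invalidation for
             * migrations this driver itself initiated; the driver
             * instead flushes its MMU as part of the
             * migrate_vma_setup()/_pages()/_finalize() sequence.
             */
            if (range->event == MMU_NOTIFY_MIGRATE &&
                range->migrate_pgmap_owner == dev->pgmap_owner)
                    return true;

            mmu_interval_set_seq(mni, cur_seq);
            my_device_flush_tlb(dev, range->start, range->end);
            return true;
    }

A driver that does not opt in simply omits the event check and flushes unconditionally, as the cover letter notes.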