[v3,0/5] Add migration support for VFIO device

Message ID: 1550611400-13703-1-git-send-email-kwankhede@nvidia.com

Message

Kirti Wankhede Feb. 19, 2019, 9:23 p.m. UTC
Add migration support for VFIO device

This patch set includes the following patches:
- Define the KABI for VFIO device migration support (see the sketch of the
  migration region header after this list).
- Add save and restore functions for PCI configuration space.
- Add generic migration functionality for VFIO devices.
  * This patch set adds functionality only for PCI devices, but it can be
    extended to other VFIO devices.
  * Adds all the basic functions required for the pre-copy, stop-and-copy and
    resume phases of migration.
  * Adds a state change notifier; from that notifier function, the VFIO
    device's state change is conveyed to the VFIO device driver.
  * During the save setup phase and the resume/load setup phase, the migration
    region is queried and used to read/write VFIO device data.
  * .save_live_pending and .save_live_iterate are implemented to use QEMU's
    iteration functionality during the pre-copy phase.
  * In .save_live_complete_precopy, i.e. in the stop-and-copy phase, data is
    read from the VFIO device driver iteratively until the pending bytes
    returned by the driver reach zero.
  * Adds a function to get the dirty pages bitmap for the pages which are used
    by the driver.
- Add vfio_listerner_log_sync to mark dirty pages.
- Make the VFIO PCI device migration capable. If a migration region is not
  provided by the driver, migration is blocked.
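
Based on the description above and the v2 -> v3 notes below, the migration
region begins with a header structure at offset 0. Here is a minimal sketch of
what that header might look like; the field names and layout are illustrative,
pieced together from this cover letter, and not necessarily the exact v3
definition:

/*
 * Hypothetical sketch of the migration region header (not the exact v3
 * layout). It sits at offset 0 of the migration region; the data section
 * follows and can be trapped or mmapped.
 */
struct vfio_device_migration_info {
        __u32 device_state;             /* VFIO device state, 2 bits */
#define VFIO_DEVICE_STATE_RUNNING       (1 << 0)
#define VFIO_DEVICE_STATE_SAVING        (1 << 1)
        __u32 reserved;
        __u64 pending_bytes;            /* device data still to be read */
        __u64 data_offset;              /* start of data within the region */
        __u64 data_size;                /* amount of data currently available */
};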

Below is the flow of state changes during live migration, where the states in
brackets represent the VM state, migration state and VFIO device state as:
    (VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)

Live migration save path:
        QEMU normal running state
        (RUNNING, _NONE, _RUNNING)
                        |
    migrate_init spawns migration_thread.
    (RUNNING, _SETUP, _RUNNING|_SAVING)
    Migration thread then calls each device's .save_setup()
                        |
    (RUNNING, _ACTIVE, _RUNNING|_SAVING)
    If the device is active, get the pending bytes via .save_live_pending().
    If pending bytes >= threshold_size, call .save_live_iterate().
    Data of the VFIO device for the pre-copy phase is copied.
    Iterate until the pending bytes converge and are less than the threshold.
                        |
    migration_completion() stops vCPUs and calls .save_live_complete_precopy
    for each active device. The VFIO device is then transitioned into the
    _SAVING state.
    (FINISH_MIGRATE, _DEVICE, _SAVING)
    For the VFIO device, iterate in .save_live_complete_precopy until the
    pending data is 0.
    (FINISH_MIGRATE, _DEVICE, 0)
                        |
    (FINISH_MIGRATE, _COMPLETED, 0)
    Migration thread schedules the cleanup bottom half and exits.
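
As a rough illustration of the stop-and-copy loop described above, here is a
sketch of what .save_live_complete_precopy could look like. It is only a
sketch: vfio_migration_set_state(), vfio_update_pending(), vfio_save_buffer()
and VFIO_MIG_FLAG_END_OF_STATE are hypothetical helpers standing in for the
device_state write and the pending_bytes/data section reads.

static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
{
    VFIODevice *vbasedev = opaque;
    int ret;

    /* Clear the _RUNNING bit; the device state becomes _SAVING only */
    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
    if (ret) {
        return ret;
    }

    /* Iterate until the driver reports no more pending device data */
    while (vfio_update_pending(vbasedev) > 0) {
        ret = vfio_save_buffer(f, vbasedev); /* copy one chunk to the stream */
        if (ret < 0) {
            return ret;
        }
    }

    /* Hypothetical end-of-state marker for the destination to key on */
    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
    return qemu_file_get_error(f);
}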

Live migration resume path:
    Incoming migration calls .load_setup for each device
    (RESTORE_VM, _ACTIVE, 0)
                        |
    For each device, .load_state is called for that device's section data
                        |
    At the end, .load_cleanup is called for each device, and the vCPUs are started.
    (RUNNING, _NONE, _RUNNING)
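
Tying the two paths together: the hooks named above are QEMU's standard
SaveVMHandlers callbacks. A sketch of how a VFIO device might wire them up is
shown below; the handler function names are assumptions based on this cover
letter, and the exact arguments of register_savevm_live() vary across QEMU
versions.

static SaveVMHandlers savevm_vfio_handlers = {
    .save_setup = vfio_save_setup,
    .save_cleanup = vfio_save_cleanup,
    .save_live_pending = vfio_save_pending,
    .save_live_iterate = vfio_save_iterate,
    .save_live_complete_precopy = vfio_save_complete_precopy,
    .load_setup = vfio_load_setup,
    .load_state = vfio_load_state,
    .load_cleanup = vfio_load_cleanup,
};

static void vfio_migration_register(VFIODevice *vbasedev)
{
    /* Registered once per device, e.g. from the migration init path */
    register_savevm_live(vbasedev->dev, "vfio", -1, 1,
                         &savevm_vfio_handlers, vbasedev);
}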

Note that:
- Post-copy migration is not supported.
- VFIO device driver version compatibility is not taken care of in this series.

v2 -> v3:
- Removed the enum of VFIO device states; the VFIO device state is now defined
  with 2 bits.
- Re-structured vfio_device_migration_info to keep it minimal, and defined the
  action taken on read and write access to its members.

v1 -> v2:
- Defined the MIGRATION region type and sub-type, which should be used with the
  region type capability.
- Re-structured vfio_device_migration_info. This structure will be placed at
  offset 0 of the migration region.
- Replaced the ioctl with read/write for the trapped part of the migration
  region.
- Added support for both types of access, trapped or mmapped, for the data
  section of the region (see the sketch after this list).
- Moved the PCI device functions to the PCI file.
- Added iteration to get the dirty page bitmap until the bitmap for all
  requested pages is copied.
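
As a rough illustration of the two access types mentioned above, here is a
hypothetical userspace sketch. The region_base/data_offset values are
illustrative; a real client would discover the migration region's offset and
layout via VFIO_DEVICE_GET_REGION_INFO and its capability chain.

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Trapped access: a plain pread(), which the vendor driver intercepts */
static ssize_t read_data_trapped(int device_fd, uint64_t region_base,
                                 uint64_t data_offset, void *buf, size_t len)
{
    return pread(device_fd, buf, len, (off_t)(region_base + data_offset));
}

/* Mmapped access: map the data section directly, no trap per access */
static void *map_data_section(int device_fd, uint64_t region_base,
                              uint64_t data_offset, size_t len)
{
    void *p = mmap(NULL, len, PROT_READ, MAP_SHARED,
                   device_fd, (off_t)(region_base + data_offset));
    return p == MAP_FAILED ? NULL : p;
}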

Thanks,
Kirti

Kirti Wankhede (5):
  VFIO KABI for migration interface
  Add save and load functions for VFIO PCI devices
  Add migration functions for VFIO devices
  Add vfio_listerner_log_sync to mark dirty pages
  Make vfio-pci device migration capable.

 hw/vfio/Makefile.objs         |   2 +-
 hw/vfio/common.c              |  31 ++
 hw/vfio/migration.c           | 714 ++++++++++++++++++++++++++++++++++++++++++
 hw/vfio/pci.c                 | 117 ++++++-
 hw/vfio/pci.h                 |  29 ++
 include/hw/vfio/vfio-common.h |  20 ++
 linux-headers/linux/vfio.h    |  65 ++++
 7 files changed, 971 insertions(+), 7 deletions(-)
 create mode 100644 hw/vfio/migration.c

Comments

Dr. David Alan Gilbert Feb. 20, 2019, 10:22 a.m. UTC | #1
* Kirti Wankhede (kwankhede@nvidia.com) wrote:
> Add migration support for VFIO device

Hi Kirti,
  Can you explain how this compares and works with Yan Zhao's
set?
  These look like two incompatible solutions to me - if that's
the case we need to take a step back and figure out how to combine
them into one.

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Kirti Wankhede Feb. 21, 2019, 5:25 a.m. UTC | #2
On 2/20/2019 3:52 PM, Dr. David Alan Gilbert wrote:
> * Kirti Wankhede (kwankhede@nvidia.com) wrote:
>> Add migration support for VFIO device
> 
> Hi Kirti,
>   Can you explain how this compares and works with Yan Zhao's
> set?

This patch set is an incremental version of my previous patch set:
https://patchwork.ozlabs.org/cover/1000719/
It takes care of the feedback received on the previous version.

This patch set is different from Yan Zhao's set.

Thanks,
Kirti

>   These look like two incompatible solutions to me - if that's
> the case we need to take a step back and figure out how to combine
> them into one.
> 
> Dave
> 
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
Tian, Kevin Feb. 21, 2019, 5:52 a.m. UTC | #3
> From: Kirti Wankhede [mailto:kwankhede@nvidia.com]
> Sent: Thursday, February 21, 2019 1:25 PM
> 
> On 2/20/2019 3:52 PM, Dr. David Alan Gilbert wrote:
> > * Kirti Wankhede (kwankhede@nvidia.com) wrote:
> >> Add migration support for VFIO device
> >
> > Hi Kirti,
> >   Can you explain how this compares and works with Yan Zhao's
> > set?
> 
> This patch set is an incremental version of my previous patch set:
> https://patchwork.ozlabs.org/cover/1000719/
> It takes care of the feedback received on the previous version.
> 
> This patch set is different from Yan Zhao's set.
> 

I can help give some background about Yan's work:

There was a big gap between Kirti's last version and the overall review
comments, especially this one:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg576652.html

Then there was no reply from Kirti as to whether she agreed with the comments
and was working on a new version.

Then we thought we should jump in to keep the ball moving, based on a fresh
implementation following the recommended direction, i.e. focusing on device
state management instead of sticking to the migration flow in the kernel API
design.

Also, more importantly, we provided a kernel-side implementation based on
Intel GVT-g to give the whole picture of both the user- and kernel-side
changes. That should give people a better understanding of how those new APIs
are expected to be used by QEMU, and to be implemented by a vendor driver.

That is why Yan just shared her work.

Now it's great to see that Kirti is still actively working on this effort and
is also moving in the right direction. Let's have a close look at the two
implementations and then choose the cleaner one as the base for future
enhancements. :-)

Thanks
Kevin
Neo Jia Feb. 21, 2019, 7:10 a.m. UTC | #4
On Thu, Feb 21, 2019 at 05:52:53AM +0000, Tian, Kevin wrote:
> > From: Kirti Wankhede [mailto:kwankhede@nvidia.com]
> > Sent: Thursday, February 21, 2019 1:25 PM
> > 
> > On 2/20/2019 3:52 PM, Dr. David Alan Gilbert wrote:
> > > * Kirti Wankhede (kwankhede@nvidia.com) wrote:
> > >> Add migration support for VFIO device
> > >
> > > Hi Kirti,
> > >   Can you explain how this compares and works with Yan Zhao's
> > > set?
> > 
> > This patch set is an incremental version of my previous patch set:
> > https://patchwork.ozlabs.org/cover/1000719/
> > It takes care of the feedback received on the previous version.
> > 
> > This patch set is different from Yan Zhao's set.
> > 
> 
> I can help give some background about Yan's work:
> 
> There was a big gap between Kirti's last version and the overall review
> comments, especially this one:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg576652.html

Hi Kevin,

> 
> Then there was no reply from Kirti as to whether she agreed with the comments
> and was working on a new version.

Sorry, we should have acked those comments when we received them last time.

> 
> Then we thought we should jump in to keep the ball moving, based on a fresh
> implementation following the recommended direction, i.e. focusing on device
> state management instead of sticking to the migration flow in the kernel API
> design.
> 
> Also, more importantly, we provided a kernel-side implementation based on
> Intel GVT-g to give the whole picture of both the user- and kernel-side
> changes. That should give people a better understanding of how those new APIs
> are expected to be used by QEMU, and to be implemented by a vendor driver.
> 
> That is why Yan just shared her work.

Really glad to see the v2 version works for you guys; appreciate the
driver-side changes.

> 
> Now it's great to see that Kirti is still actively working on this effort and
> is also moving in the right direction. Let's have a close look at the two
> implementations and then choose the cleaner one as the base for future
> enhancements. :-)

Yes, the v3 has addressed all the comments / concerns raised on the v2. I think
we should take a look and keep moving.

Just a quick thought - would it be possible / better to have Kirti focus on the
QEMU patches and Yan take care of the GVT-g kernel driver side changes? This
will give us the best testing coverage. Hope I don't step on anybody's toes
here. ;-)

Thanks,
Neo

> 
> Thanks
> Kevin
Tian, Kevin Feb. 27, 2019, 1:51 a.m. UTC | #5
> From: Neo Jia [mailto:cjia@nvidia.com]
> Sent: Thursday, February 21, 2019 3:10 PM
> 
> On Thu, Feb 21, 2019 at 05:52:53AM +0000, Tian, Kevin wrote:
> > > From: Kirti Wankhede [mailto:kwankhede@nvidia.com]
> > > Sent: Thursday, February 21, 2019 1:25 PM
> > >
> > > On 2/20/2019 3:52 PM, Dr. David Alan Gilbert wrote:
> > > > * Kirti Wankhede (kwankhede@nvidia.com) wrote:
> > > >> Add migration support for VFIO device
> > > >
> > > > Hi Kirti,
> > > >   Can you explain how this compares and works with Yan Zhao's
> > > > set?
> > >
> > > This patch set is an incremental version of my previous patch set:
> > > https://patchwork.ozlabs.org/cover/1000719/
> > > It takes care of the feedback received on the previous version.
> > >
> > > This patch set is different from Yan Zhao's set.
> > >
> >
> > I can help give some background about Yan's work:
> >
> > There was a big gap between Kirti's last version and the overall review
> > comments, especially this one:
> > https://www.mail-archive.com/qemu-devel@nongnu.org/msg576652.html
> 
> Hi Kevin,
> 
> >
> > Then there was no reply from Kirti as to whether she agreed with the
> > comments and was working on a new version.
> 
> Sorry, we should have acked those comments when we received them last time.
> 
> >
> > Then we thought we should jump in to keep the ball moving, based on a fresh
> > implementation following the recommended direction, i.e. focusing on device
> > state management instead of sticking to the migration flow in the kernel
> > API design.
> >
> > Also, more importantly, we provided a kernel-side implementation based on
> > Intel GVT-g to give the whole picture of both the user- and kernel-side
> > changes. That should give people a better understanding of how those new
> > APIs are expected to be used by QEMU, and to be implemented by a vendor
> > driver.
> >
> > That is why Yan just shared her work.
> 
> Really glad to see the v2 version works for you guys; appreciate the
> driver-side changes.
> 
> >
> > Now it's great to see that Kirti is still actively working on this effort
> > and is also moving in the right direction. Let's have a close look at the
> > two implementations and then choose the cleaner one as the base for future
> > enhancements. :-)
> 
> Yes, the v3 has addressed all the comments / concerns raised on the v2. I
> think we should take a look and keep moving.
> 
> Just a quick thought - would it be possible / better to have Kirti focus on
> the QEMU patches and Yan take care of the GVT-g kernel driver side changes?
> This will give us the best testing coverage. Hope I don't step on anybody's
> toes here. ;-)
> 

That is one option to be considered. For now let's focus on closing the
kernel interface definition. I saw Alex has given lots of comments on both
series. Hope we can discuss and converge on that part soon, before moving to
the next version. :-)

Thanks
Kevin