
[v3,00/24] Multifd

Message ID cover.1731773021.git.maciej.szmigiero@oracle.com


Maciej S. Szmigiero Nov. 17, 2024, 7:19 p.m. UTC
From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>

This is an updated v3 patch series of the v2 series located here:
https://lore.kernel.org/qemu-devel/cover.1724701542.git.maciej.szmigiero@oracle.com/

Changes from v2:
* Reworked the non-AIO (generic) thread pool to use GLib's GThreadPool
instead of making the current QEMU AIO thread pool generic.
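For readers unfamiliar with the approach: GLib's GThreadPool already provides
the queue-plus-workers machinery (g_thread_pool_new() / g_thread_pool_push() /
g_thread_pool_free()). As a dependency-free illustration of the same
worker-pool pattern (not QEMU's actual code, all names hypothetical), a
minimal pthread sketch:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Demo work item: bump a shared counter (used by the usage example). */
static _Atomic int counter;
static void bump(void *opaque)
{
    (void)opaque;
    atomic_fetch_add(&counter, 1);
}

typedef void (*PoolFunc)(void *opaque);

typedef struct Task {
    PoolFunc func;
    void *opaque;
    struct Task *next;
} Task;

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    Task *head, *tail;
    bool shutdown;
    pthread_t threads[4];
} Pool;

static void *worker(void *arg)
{
    Pool *p = arg;

    for (;;) {
        pthread_mutex_lock(&p->lock);
        while (!p->head && !p->shutdown) {
            pthread_cond_wait(&p->cond, &p->lock);
        }
        if (!p->head) {             /* shutting down and queue drained */
            pthread_mutex_unlock(&p->lock);
            return NULL;
        }
        Task *t = p->head;
        p->head = t->next;
        if (!p->head) {
            p->tail = NULL;
        }
        pthread_mutex_unlock(&p->lock);

        t->func(t->opaque);         /* run the task outside the lock */
        free(t);
    }
}

static void pool_init(Pool *p)
{
    pthread_mutex_init(&p->lock, NULL);
    pthread_cond_init(&p->cond, NULL);
    p->head = p->tail = NULL;
    p->shutdown = false;
    for (int i = 0; i < 4; i++) {
        pthread_create(&p->threads[i], NULL, worker, p);
    }
}

static void pool_submit(Pool *p, PoolFunc func, void *opaque)
{
    Task *t = malloc(sizeof(*t));
    t->func = func;
    t->opaque = opaque;
    t->next = NULL;

    pthread_mutex_lock(&p->lock);
    if (p->tail) {
        p->tail->next = t;
    } else {
        p->head = t;
    }
    p->tail = t;
    pthread_cond_signal(&p->cond);
    pthread_mutex_unlock(&p->lock);
}

static void pool_wait(Pool *p)      /* drain the queue, then stop workers */
{
    pthread_mutex_lock(&p->lock);
    p->shutdown = true;
    pthread_cond_broadcast(&p->cond);
    pthread_mutex_unlock(&p->lock);
    for (int i = 0; i < 4; i++) {
        pthread_join(p->threads[i], NULL);
    }
}
```

With GThreadPool the worker loop, queue, and shutdown handling above are
replaced by a single g_thread_pool_new() call, which is why the series
switched to it rather than generalizing the AIO pool.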

* Added a QEMU_VM_COMMAND MIG_CMD_SWITCHOVER_START sub-command to the
migration bit stream protocol, gated by a migration compatibility flag.
This new bit stream sub-command is used to achieve a barrier between the main
migration channel device state data and the multifd device state data, instead
of introducing save_live_complete_precopy_{begin,end} handlers for that as
the previous patch set version did.
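As a toy illustration of the destination-side idea (this is NOT QEMU's actual
wire format; command numbers and field layout here are invented, only the
MIG_CMD_SWITCHOVER_START name comes from the series): the loader dispatches on
a command id, and seeing SWITCHOVER_START tells it that all main-channel
device state up to this point has been sent, so multifd device-state loads may
be allowed to complete:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical command id; the real value is defined by QEMU. */
enum {
    MIG_CMD_SWITCHOVER_START = 2,
};

static int switchover_start_seen;

/* Toy dispatcher: 2-byte big-endian command id, 2-byte payload length. */
static int loadvm_process_command(const uint8_t *buf, size_t len)
{
    if (len < 4) {
        return -1;                  /* need command id + payload length */
    }
    uint16_t cmd = (uint16_t)((buf[0] << 8) | buf[1]);
    uint16_t data_len = (uint16_t)((buf[2] << 8) | buf[3]);

    switch (cmd) {
    case MIG_CMD_SWITCHOVER_START:
        if (data_len != 0) {
            return -1;              /* carries no payload */
        }
        switchover_start_seen = 1;  /* multifd loads may now finish */
        return 0;
    default:
        return -1;                  /* unknown command */
    }
}
```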

* Added a new migration core thread pool of optional load threads and used
it to implement the VFIO load thread instead of introducing a load_finish
handler as the previous patch set version did.

* Made the VFIO device config state load operation happen in that device load
thread instead of in the (now gone) load_finish handler, which did such a load
on the main migration thread.
In the future this may allow pushing the BQL deeper into the device config
state load internals and so doing more of the load in parallel.

* Switched multifd_send() to using a serializing mutex for thread safety
instead of atomics, as suggested by Peter, since this seems to cause no
performance regression while being simpler.
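A minimal sketch of that simplification (names are illustrative, not QEMU's
actual ones): a single mutex serializes concurrent multifd_send() callers
(the RAM sender and the device-state senders), so channel selection and
hand-off need no lock-free atomics:

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative send state; the real MultiFDSendState is far richer. */
typedef struct {
    pthread_mutex_t queue_lock;  /* serializes all senders */
    int next_channel;
    int num_channels;
    int sent[8];                 /* per-channel send counter, demo only */
} MultiFDSendState;

static bool multifd_send(MultiFDSendState *s)
{
    pthread_mutex_lock(&s->queue_lock);
    int ch = s->next_channel;                 /* round-robin channel pick */
    s->next_channel = (ch + 1) % s->num_channels;
    s->sent[ch]++;                            /* "hand packet to channel" */
    pthread_mutex_unlock(&s->queue_lock);
    return true;
}
```

Since the critical section is tiny and sends are dominated by I/O, the mutex
plausibly costs nothing measurable, which matches the benchmark observation
above.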

* Added two patches improving the SaveVMHandlers documentation: one
documenting the BQL behavior of load SaveVMHandlers, the other explaining
the {load,save}_cleanup handlers' semantics.

* Added Peter's proposed patch making MultiFDSendData a struct from
https://lore.kernel.org/qemu-devel/ZuCickYhs3nf2ERC@x1n/
The other two patches from that message bring no performance benefits, so
they were skipped (as discussed in that e-mail thread).

* Switched x-migration-multifd-transfer VFIO property to tri-state (On,
Off, Auto), with Auto being now the default value.
This means that VFIO device state transfer via multifd channels is
automatically attempted in configurations that otherwise support it.
Note that in this patch set version (in contrast with the previous version)
x-migration-multifd-transfer setting is meaningful both on source AND
destination QEMU.
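The Auto default can be pictured with a small resolution helper (the OnOffAuto
tri-state mirrors QEMU's qapi enum of that name; the helper itself is
hypothetical and simplified, e.g. real code would reject a forced On in an
unsupported configuration rather than honor it):

```c
#include <stdbool.h>

/* Tri-state property value, mirroring QEMU's qapi OnOffAuto intent. */
typedef enum { ON_OFF_AUTO_AUTO, ON_OFF_AUTO_ON, ON_OFF_AUTO_OFF } OnOffAuto;

/* Hypothetical helper: with Auto (the new default), multifd device state
 * transfer is used whenever the configuration supports it; an explicit
 * On/Off overrides the capability probe. */
static bool vfio_multifd_transfer_enabled(OnOffAuto prop, bool supported)
{
    switch (prop) {
    case ON_OFF_AUTO_ON:
        return true;           /* user forced on */
    case ON_OFF_AUTO_OFF:
        return false;          /* user forced off */
    default:
        return supported;      /* Auto: follow what the config supports */
    }
}
```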

* Fixed a race condition with respect to the final multifd channel SYNC
packet sent by the RAM transfer code.

* Made VFIO's bytes_transferred counter atomic since it is accessed from
multiple threads (thanks to Avihai for spotting it).
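The shape of that fix, sketched with C11 stdatomic (QEMU itself uses its own
qatomic_* wrappers; function names here are illustrative): a plain increment
from both the migration thread and the multifd sender threads would be a data
race, whereas an atomic fetch-add is safe:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Updated from multiple threads, hence _Atomic rather than plain uint64_t. */
static _Atomic uint64_t bytes_transferred;

static void vfio_add_bytes_transferred(uint64_t n)
{
    atomic_fetch_add(&bytes_transferred, n);
}

static uint64_t vfio_bytes_transferred(void)
{
    return atomic_load(&bytes_transferred);
}
```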

* Fixed an issue where the VFIO device config sender QEMUFile wouldn't be
closed in some error conditions; switched to QEMUFile g_autoptr() automatic
memory management there to avoid such bugs in the future (also thanks
to Avihai for spotting the issue).
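g_autoptr() is built on the GCC/Clang cleanup attribute (via GLib's
G_DEFINE_AUTOPTR_CLEANUP_FUNC): when the variable goes out of scope,
including on early error returns, its cleanup function runs automatically,
which is exactly the class of leak fixed above. A dependency-free sketch of
the mechanism using FILE instead of QEMUFile (names hypothetical):

```c
#include <stdio.h>

static int files_closed;            /* demo-only: counts cleanups */

static void autoclose(FILE **fp)
{
    if (*fp) {
        fclose(*fp);
        files_closed++;
    }
}
/* Analogous to g_autoptr(FILE): cleanup runs at end of scope. */
#define AUTO_FILE __attribute__((cleanup(autoclose))) FILE *

static int write_config(const char *path, const char *data)
{
    AUTO_FILE f = fopen(path, "w");
    if (!f) {
        return -1;                  /* nothing opened, nothing to close */
    }
    if (fputs(data, f) < 0) {
        return -1;                  /* early error return: f still closed */
    }
    return 0;                       /* f closed here too */
}
```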

* Many, MANY small changes, like renamed functions, added review tags,
lock annotations, code formatting, changes split out into separate
commits, etc.

* Redid benchmarks.

========================================================================

Benchmark results:
These are the 25th percentile of downtime results from 70-100 back-and-forth
live migrations with the same VM config (the guest wasn't restarted during
these migrations).

Previous benchmarks reported the lowest downtime result ("0th percentile")
instead, but these were subject to variation due to often being outliers.

The setup used for benchmarking was the same as the one used by the RFC
version of this patch set.


Results with 6 multifd channels:
            4 VFs   2 VFs    1 VF
Disabled: 1900 ms  859 ms  487 ms
Enabled:  1095 ms  556 ms  366 ms 

Results with 4 VFs but varied multifd channel count:
             6 ch     8 ch    15 ch
Enabled:  1095 ms  1104 ms  1125 ms 


Important note:
4 VF benchmarks were done with commit 5504a8126115
("KVM: Dynamic sized kvm memslots array") and its revert dependencies
reverted, since this seems to improve performance in this VM config when
multifd transfer is enabled: the downtime with this commit present is
1141 ms enabled / 1730 ms disabled.

Smaller VF counts actually do seem to benefit from this commit, so it's
likely that in the future adding some kind of memslot pre-allocation
bit stream message will make sense to avoid this downtime regression for
4 VF configs (and likely higher VF counts too).

========================================================================

This series now obviously targets the post-QEMU-9.2 release
(AFAIK to be called 10.0).

It will need to be changed to use hw_compat_10_0 once that becomes available.

========================================================================

Maciej S. Szmigiero (23):
  migration: Clarify that {load,save}_cleanup handlers can run without
    setup
  thread-pool: Remove thread_pool_submit() function
  thread-pool: Rename AIO pool functions to *_aio() and data types to
    *Aio
  thread-pool: Implement generic (non-AIO) pool support
  migration: Add MIG_CMD_SWITCHOVER_START and its load handler
  migration: Add qemu_loadvm_load_state_buffer() and its handler
  migration: Document the BQL behavior of load SaveVMHandlers
  migration: Add thread pool of optional load threads
  migration/multifd: Split packet into header and RAM data
  migration/multifd: Device state transfer support - receive side
  migration/multifd: Make multifd_send() thread safe
  migration/multifd: Add an explicit MultiFDSendData destructor
  migration/multifd: Device state transfer support - send side
  migration/multifd: Add migration_has_device_state_support()
  migration/multifd: Send final SYNC only after device state is complete
  migration: Add save_live_complete_precopy_thread handler
  vfio/migration: Don't run load cleanup if load setup didn't run
  vfio/migration: Add x-migration-multifd-transfer VFIO property
  vfio/migration: Add load_device_config_state_start trace event
  vfio/migration: Convert bytes_transferred counter to atomic
  vfio/migration: Multifd device state transfer support - receive side
  migration/qemu-file: Define g_autoptr() cleanup function for QEMUFile
  vfio/migration: Multifd device state transfer support - send side

Peter Xu (1):
  migration/multifd: Make MultiFDSendData a struct

 hw/core/machine.c                  |   2 +
 hw/vfio/migration.c                | 588 ++++++++++++++++++++++++++++-
 hw/vfio/pci.c                      |  11 +
 hw/vfio/trace-events               |  11 +-
 include/block/aio.h                |   8 +-
 include/block/thread-pool.h        |  20 +-
 include/hw/vfio/vfio-common.h      |  21 ++
 include/migration/client-options.h |   4 +
 include/migration/misc.h           |  16 +
 include/migration/register.h       |  67 +++-
 include/qemu/typedefs.h            |   5 +
 migration/colo.c                   |   3 +
 migration/meson.build              |   1 +
 migration/migration-hmp-cmds.c     |   2 +
 migration/migration.c              |   3 +
 migration/migration.h              |   2 +
 migration/multifd-device-state.c   | 193 ++++++++++
 migration/multifd-nocomp.c         |  45 ++-
 migration/multifd.c                | 228 +++++++++--
 migration/multifd.h                |  73 +++-
 migration/options.c                |   9 +
 migration/qemu-file.h              |   2 +
 migration/ram.c                    |  10 +-
 migration/savevm.c                 | 183 ++++++++-
 migration/savevm.h                 |   4 +
 migration/trace-events             |   1 +
 scripts/analyze-migration.py       |  11 +
 tests/unit/test-thread-pool.c      |   2 +-
 util/async.c                       |   6 +-
 util/thread-pool.c                 | 174 +++++++--
 util/trace-events                  |   6 +-
 31 files changed, 1586 insertions(+), 125 deletions(-)
 create mode 100644 migration/multifd-device-state.c