Message ID: 20241009234610.27039-1-yichen.wang@bytedance.com (mailing list archive)
Series: Use Intel DSA accelerator to offload zero page checking in multifd live migration.
Yichen Wang <yichen.wang@bytedance.com> writes:

> v6
> * Rebase on top of 838fc0a8769d7cc6edfe50451ba4e3368395f5c1;
> * Refactor code to have clean history on all commits;
> * Add comments on DSA specific defines about how the value is picked;
> * Address all comments from v5 reviews about api defines, questions, etc.;
>
> v5
> * Rebase on top of 39a032cea23e522268519d89bb738974bc43b6f6.
> * Rename struct definitions with typedef and CamelCase names;
> * Add build and runtime checks for the DSA accelerator;
> * Address all comments from v4 reviews about typos, licenses, comments,
>   error reporting, etc.
>
> v4
> * Rebase on top of 85b597413d4370cb168f711192eaef2eb70535ac.
> * A separate "multifd zero page checking" patchset was split from this
>   patchset's v3 and got merged into master. v4 re-applied the rest of all
>   commits on top of that patchset, re-factored and re-tested.
>   https://lore.kernel.org/all/20240311180015.3359271-1-hao.xiang@linux.dev/
> * Address some feedback from v3 that was likely overlooked.
>
> v3
> * Rebase on top of 7425b6277f12e82952cede1f531bfc689bf77fb1.
> * Fix errors/warnings from checkpatch.pl.
> * Fix a use-after-free bug when the multifd-dsa-accel option is not set.
> * Handle errors from dsa_init and correctly propagate them.
> * Remove an unnecessary call to dsa_stop.
> * Detect availability of the DSA feature at compile time.
> * Implement a generic batch_task structure and a DSA-specific dsa_batch_task.
> * Remove all exit() calls and propagate errors correctly.
> * Use bytes instead of page count to configure the multifd-packet-size option.
>
> v2
> * Rebase on top of 3e01f1147a16ca566694b97eafc941d62fa1e8d8.
> * Leave Juan's changes in their original form instead of squashing them.
> * Add a new commit to refactor the multifd_send_thread function to prepare
>   for introducing the DSA offload functionality.
> * Use page count to configure the multifd-packet-size option.
> * Don't use the FLAKY flag in DSA tests.
> * Test if the DSA integration test is set up correctly and skip the test
>   if not.
> * Fixed a broken link in the previous patch cover.
>
> * Background:
>
> I posted an RFC about DSA offloading in QEMU:
> https://patchew.org/QEMU/20230529182001.2232069-1-hao.xiang@bytedance.com/
>
> This patchset implements DSA offloading of zero page checking in the
> multifd live migration code path.
>
> * Overview:
>
> Intel Data Streaming Accelerator (DSA) was introduced in Intel's 4th
> generation Xeon servers, aka Sapphire Rapids.
> https://cdrdv2-public.intel.com/671116/341204-intel-data-streaming-accelerator-spec.pdf
> https://www.intel.com/content/www/us/en/content-details/759709/intel-data-streaming-accelerator-user-guide.html
> One of the things DSA can do is offload memory comparison workloads from
> the CPU to the DSA accelerator hardware. This patchset implements a
> solution to offload QEMU's zero page checking from the CPU to the DSA
> accelerator hardware. We gain two benefits from this change:
> 1. Reduced CPU usage in the multifd live migration workflow across all
>    use cases.
> 2. Reduced total migration time in some use cases.
>
> * Design:
>
> These are the logical steps to perform DSA offloading:
> 1. Configure DSA accelerators and create user-space-openable DSA work
>    queues via the idxd driver.
> 2. Map DSA's work queue into a user space address space.
> 3. Fill an in-memory task descriptor to describe the memory operation.
> 4. Use the dedicated CPU instruction _enqcmd to queue a task descriptor
>    to the work queue.
> 5. Poll the task descriptor's completion status field until the task
>    completes.
> 6. Check the return status.
>
> The memory operation is now done entirely by the accelerator hardware,
> but the new workflow introduces overheads: the extra CPU cost of
> preparing and submitting the task descriptors, and the extra CPU cost of
> polling for completion. The design centers on minimizing these two
> overheads.
>
> 1.
In order to reduce the overhead of task preparation and submission,
>    we use batch descriptors. A batch descriptor contains N individual
>    zero page checking tasks, where the default N is 128 (default packet
>    size / page size), and we can increase N by setting the packet size
>    via a new migration option.
> 2. The multifd sender threads prepare and submit batch tasks to the DSA
>    hardware and wait on a synchronization object for task completion.
>    Whenever a DSA task is submitted, the task structure is added to a
>    thread-safe queue. It's safe for multiple multifd sender threads to
>    submit tasks concurrently.
> 3. Multiple DSA hardware devices can be used. During multifd
>    initialization, every sender thread is assigned a DSA device to work
>    with. We use a round-robin scheme to evenly distribute the work
>    across all used DSA devices.
> 4. A dedicated thread, dsa_completion, performs busy polling for all DSA
>    task completions. The thread keeps dequeuing DSA tasks from the
>    thread-safe queue and blocks when there is no outstanding DSA task.
>    While polling for completion of a DSA task, the thread uses the CPU
>    instruction _mm_pause between the iterations of the busy loop to save
>    some CPU power as well as freeing core resources for the sibling
>    hyperthread.
> 5. The DSA accelerator can encounter errors. The most common error is a
>    page fault. We have tested using the devices to handle page faults,
>    but performance is bad. Right now, if DSA hits a page fault, we fall
>    back to the CPU to complete the rest of the work. The CPU fallback is
>    done in the multifd sender thread.
> 6. Added a new migration option multifd-dsa-accel to set the DSA device
>    path. If set, the multifd workflow will leverage the DSA devices for
>    offloading.
> 7. Added a new migration option multifd-normal-page-ratio to make
>    multifd live migration easier to test.
>    Setting a normal page ratio will make live migration recognize a
>    zero page as a normal page and send the entire payload over the
>    network. If we want to send a large network payload and analyze
>    throughput, this option is useful.
> 8. Added a new migration option multifd-packet-size. This can increase
>    the number of pages being zero page checked and sent over the
>    network. The extra synchronization between the sender threads and the
>    dsa_completion thread is an overhead. Using a large packet size can
>    reduce that overhead.
>
> * Performance:
>
> We use two Intel 4th generation Xeon servers for testing.
>
> Architecture:        x86_64
> CPU(s):              192
> Thread(s) per core:  2
> Core(s) per socket:  48
> Socket(s):           2
> NUMA node(s):        2
> Vendor ID:           GenuineIntel
> CPU family:          6
> Model:               143
> Model name:          Intel(R) Xeon(R) Platinum 8457C
> Stepping:            8
> CPU MHz:             2538.624
> CPU max MHz:         3800.0000
> CPU min MHz:         800.0000
>
> We perform multifd live migration with the below setup:
> 1. The VM has 100GB of memory.
> 2. Use the new migration option multifd-set-normal-page-ratio to control
>    the total size of the payload sent over the network.
> 3. Use 8 multifd channels.
> 4. Use TCP for live migration.
> 5. Use the CPU to perform zero page checking as the baseline.
> 6. Use one DSA device to offload zero page checking to compare with the
>    baseline.
> 7. Use "perf sched record" and "perf sched timehist" to analyze CPU
>    usage.
>
> A) Scenario 1: 50% (50GB) normal pages on a 100GB VM.
>
> CPU usage
>
> |---------------|---------------|---------------|---------------|
> |               |comm           |runtime(msec)  |totaltime(msec)|
> |---------------|---------------|---------------|---------------|
> |Baseline       |live_migration |5657.58        |               |
> |               |multifdsend_0  |3931.563       |               |
> |               |multifdsend_1  |4405.273       |               |
> |               |multifdsend_2  |3941.968       |               |
> |               |multifdsend_3  |5032.975       |               |
> |               |multifdsend_4  |4533.865       |               |
> |               |multifdsend_5  |4530.461       |               |
> |               |multifdsend_6  |5171.916       |               |
> |               |multifdsend_7  |4722.769       |41922          |
> |---------------|---------------|---------------|---------------|
> |DSA            |live_migration |6129.168       |               |
> |               |multifdsend_0  |2954.717       |               |
> |               |multifdsend_1  |2766.359       |               |
> |               |multifdsend_2  |2853.519       |               |
> |               |multifdsend_3  |2740.717       |               |
> |               |multifdsend_4  |2824.169       |               |
> |               |multifdsend_5  |2966.908       |               |
> |               |multifdsend_6  |2611.137       |               |
> |               |multifdsend_7  |3114.732       |               |
> |               |dsa_completion |3612.564       |32568          |
> |---------------|---------------|---------------|---------------|
>
> The baseline total runtime is calculated by adding up the runtime of all
> multifdsend_X threads and the live_migration thread. The DSA offloading
> total runtime is calculated by adding up the runtime of all
> multifdsend_X threads, the live_migration thread and the dsa_completion
> thread. That is 41922 msec vs 32568 msec of runtime, or about 22% total
> CPU usage savings.
>
> Latency
> |---------------|-----------|----------|--------------|---------------|-------------|
> |               |total time |down time |throughput    |transferred-ram|total-ram    |
> |---------------|-----------|----------|--------------|---------------|-------------|
> |Baseline       |10343 ms   |161 ms    |41007.00 mbps |51583797 kb    |102400520 kb |
> |---------------|-----------|----------|--------------|---------------|-------------|
> |DSA offload    |9535 ms    |135 ms    |46554.40 mbps |53947545 kb    |102400520 kb |
> |---------------|-----------|----------|--------------|---------------|-------------|
>
> Total time is 8% faster and down time is 16% faster.
>
> B) Scenario 2: 100% (100GB) zero pages on a 100GB VM.
>
> CPU usage
> |---------------|---------------|---------------|---------------|
> |               |comm           |runtime(msec)  |totaltime(msec)|
> |---------------|---------------|---------------|---------------|
> |Baseline       |live_migration |4860.718       |               |
> |               |multifdsend_0  |748.875        |               |
> |               |multifdsend_1  |898.498        |               |
> |               |multifdsend_2  |787.456        |               |
> |               |multifdsend_3  |764.537        |               |
> |               |multifdsend_4  |785.687        |               |
> |               |multifdsend_5  |756.941        |               |
> |               |multifdsend_6  |774.084        |               |
> |               |multifdsend_7  |782.900        |11154          |
> |---------------|---------------|---------------|---------------|
> |DSA offloading |live_migration |3846.976       |               |
> |               |multifdsend_0  |191.880        |               |
> |               |multifdsend_1  |166.331        |               |
> |               |multifdsend_2  |168.528        |               |
> |               |multifdsend_3  |197.831        |               |
> |               |multifdsend_4  |169.580        |               |
> |               |multifdsend_5  |167.984        |               |
> |               |multifdsend_6  |198.042        |               |
> |               |multifdsend_7  |170.624        |               |
> |               |dsa_completion |3428.669       |8700           |
> |---------------|---------------|---------------|---------------|
>
> The baseline total runtime is 11154 msec and the DSA offloading total
> runtime is 8700 msec. That is a 22% CPU saving.
>
> Latency
> |---------------|-----------|----------|--------------|---------------|-------------|
> |               |total time |down time |throughput    |transferred-ram|total-ram    |
> |---------------|-----------|----------|--------------|---------------|-------------|
> |Baseline       |4867 ms    |20 ms     |1.51 mbps     |565 kb         |102400520 kb |
> |---------------|-----------|----------|--------------|---------------|-------------|
> |DSA offload    |3888 ms    |18 ms     |1.89 mbps     |565 kb         |102400520 kb |
> |---------------|-----------|----------|--------------|---------------|-------------|
>
> Total time is 20% faster and down time is 10% faster.
>
> * Testing:
>
> 1. Added unit tests to cover the added code path in dsa.c.
> 2. Added integration tests to cover multifd live migration using DSA
>    offloading.
>
> Hao Xiang (11):
>   meson: Introduce new instruction set enqcmd to the build system.
>   util/dsa: Implement DSA device start and stop logic.
>   util/dsa: Implement DSA task enqueue and dequeue.
>   util/dsa: Implement DSA task asynchronous completion thread model.
>   util/dsa: Implement zero page checking in DSA task.
>   util/dsa: Implement DSA task asynchronous submission and wait for
>     completion.
>   migration/multifd: Add new migration option for multifd DSA
>     offloading.
>   migration/multifd: Enable DSA offloading in multifd sender path.
>   migration/multifd: Add migration option set packet size.
>   util/dsa: Add unit test coverage for Intel DSA task submission and
>     completion.
>   migration/multifd: Add integration tests for multifd with Intel DSA
>     offloading.
>
> Yichen Wang (1):
>   util/dsa: Add idxd into linux header copy list.
>
>  hmp-commands.hx                 |    2 +-
>  include/qemu/dsa.h              |  189 ++++++
>  meson.build                     |   14 +
>  meson_options.txt               |    2 +
>  migration/migration-hmp-cmds.c  |   26 +-
>  migration/multifd-zero-page.c   |  133 +++-
>  migration/multifd-zlib.c        |    6 +-
>  migration/multifd-zstd.c        |    6 +-
>  migration/multifd.c             |   19 +-
>  migration/multifd.h             |    5 +
>  migration/options.c             |   69 ++
>  migration/options.h             |    2 +
>  qapi/migration.json             |   49 +-
>  scripts/meson-buildoptions.sh   |    3 +
>  scripts/update-linux-headers.sh |    2 +-
>  tests/qtest/migration-test.c    |   80 ++-
>  tests/unit/meson.build          |    6 +
>  tests/unit/test-dsa.c           |  503 ++++++++++++++
>  util/dsa.c                      | 1114 +++++++++++++++++++++++++++++++
>  util/meson.build                |    3 +
>  20 files changed, 2204 insertions(+), 29 deletions(-)
>  create mode 100644 include/qemu/dsa.h
>  create mode 100644 tests/unit/test-dsa.c
>  create mode 100644 util/dsa.c

Still doesn't build without DSA:

qemu/include/qemu/dsa.h: In function ‘buffer_is_zero_dsa_batch_sync’:
/home/fabiano/kvm/qemu/include/qemu/dsa.h:183:16: error: ‘errp’
undeclared (first use in this function); did you mean ‘errno’?
   error_setg(errp, "DSA accelerator is not enabled.");
              ^
qemu/include/qapi/error.h:318:26: note: in definition of macro ‘error_setg’
   error_setg_internal((errp), __FILE__, __LINE__, __func__, \
                       ^~~~
qemu/include/qemu/dsa.h:183:16: note: each undeclared identifier is
reported only once for each function it appears in
   error_setg(errp, "DSA accelerator is not enabled.");
              ^
qemu/include/qapi/error.h:318:26: note: in definition of macro ‘error_setg’
   error_setg_internal((errp), __FILE__, __LINE__, __func__, \
                       ^~~~
On Wed, Oct 09, 2024 at 04:45:58PM -0700, Yichen Wang wrote:

[... changelog and cover letter quoted in full above; trimmed ...]

> We perform multifd live migration with the below setup:
> 1. The VM has 100GB of memory.
> 2. Use the new migration option multifd-set-normal-page-ratio to control
>    the total size of the payload sent over the network.

I didn't find this option. Is it removed?

[...]

> A) Scenario 1: 50% (50GB) normal pages on a 100GB VM.
>
> [... Scenario 1 CPU usage and latency tables quoted above; trimmed ...]
>
> Total time is 8% faster and down time is 16% faster.
Are these test results averaged over many runs, or from a single test? I
wonder how stable the total time and downtime are across runs.

> B) Scenario 2: 100% (100GB) zero pages on a 100GB VM.
>
> [... Scenario 2 CPU usage and latency tables quoted above; trimmed ...]
>
> Total time is 20% faster and down time is 10% faster.
>
> * Testing:
>
> 1.
Added unit tests to cover the added code path in dsa.c.
> 2. Added integration tests to cover multifd live migration using DSA
>    offloading.

[... commit list and diffstat quoted above; trimmed ...]

The doc update is still missing under docs/; we may need that for a
final merge.

Are you using this in production? How does it perform in real life?
What is the major issue you are solving? Is it "zero detection eats too
much CPU", or "migration is too slow", or "we're experimenting with the
new hardware to see how it goes when applied to migration"?

There's a lot of new code added for DSA just for this optimization on
zero page detection. We'd better understand the major benefits, and also
whether it's applicable to other parts of QEMU or migration only. I
actually wonder, if we're going to support enqcmd, whether migration is
the best starting point (rather than other places where we emulate tons
of devices, and maybe some backends could speed up I/O with enqcmd in
some form?).. but it's more of a pure question.

Thanks,
* Peter Xu (peterx@redhat.com) wrote:
> The doc update is still missing under docs/; we may need that for a
> final merge.
>
> Are you using this in production? How does it perform in real life?
> What is the major issue you are solving? Is it "zero detection eats too
> much CPU", or "migration is too slow", or "we're experimenting with the
> new hardware to see how it goes when applied to migration"?
>
> There's a lot of new code added for DSA just for this optimization on
> zero page detection. We'd better understand the major benefits, and
> also whether it's applicable to other parts of QEMU or migration only.
> I actually wonder, if we're going to support enqcmd, whether migration
> is the best starting point (rather than other places where we emulate
> tons of devices, and maybe some backends could speed up I/O with enqcmd
> in some form?).. but it's more of a pure question.

The other thing that worries me here is that there's not much
abstraction; I'm sure there's a whole bunch of offload cards that could
do tricks like this. How do we avoid having this much extra code for
each one?

Dave

> Thanks,
>
> --
> Peter Xu
On Fri, Oct 11, 2024 at 9:32 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Oct 09, 2024 at 04:45:58PM -0700, Yichen Wang wrote:
>
> The doc update is still missing under docs/; we may need that for a
> final merge.

I will work with Intel to prepare a doc in my next patch.

> Are you using this in production? How does it perform in real life?
> What is the major issue you are solving? Is it "zero detection eats too
> much CPU", or "migration is too slow", or "we're experimenting with the
> new hardware to see how it goes when applied to migration"?

Yes, we do use it in production. Our codebase is based on an old QEMU
release (5.X), so we backported the series there. The major use case is
just to accelerate live migration, and it is currently under QA scale
testing. The main motivation is that we reserve 4 cores for all control
plane services, including QEMU. While doing 2nd-scheduling (i.e. live
migration to reduce fragmentation, very commonly seen at cloud
providers), we found that QEMU eats a lot of CPU, which causes jitter
and slowness on the control planes. Even though this does not happen too
frequently, we still want it to be stable. With the help of DSA, it
saves CPU while accelerating the process, so we want to use it in
production.

> There's a lot of new code added for DSA just for this optimization on
> zero page detection. We'd better understand the major benefits, and
> also whether it's applicable to other parts of QEMU or migration only.
> I actually wonder, if we're going to support enqcmd, whether migration
> is the best starting point (rather than other places where we emulate
> tons of devices, and maybe some backends could speed up I/O with enqcmd
> in some form?).. but it's more of a pure question.

I tried to put most of the code in dsa.c and make minimal changes to all
other files. Even in dsa.c, there is an abstraction for "submit task",
and an implementation of "submit a buffer_zero task".
I think this is the best I can think of. I am open to suggestions on how
we can move this forward. :)

> Thanks,
>
> --
> Peter Xu
On Fri, Oct 11, 2024 at 7:14 AM Fabiano Rosas <farosas@suse.de> wrote:
>
> Yichen Wang <yichen.wang@bytedance.com> writes:
>
> Still doesn't build without DSA:
>
> qemu/include/qemu/dsa.h: In function ‘buffer_is_zero_dsa_batch_sync’:
> /home/fabiano/kvm/qemu/include/qemu/dsa.h:183:16: error: ‘errp’
> undeclared (first use in this function); did you mean ‘errno’?
>
>    error_setg(errp, "DSA accelerator is not enabled.");
>               ^
> qemu/include/qapi/error.h:318:26: note: in definition of macro ‘error_setg’
>    error_setg_internal((errp), __FILE__, __LINE__, __func__, \
>                        ^~~~
> qemu/include/qemu/dsa.h:183:16: note: each undeclared identifier is
> reported only once for each function it appears in
>    error_setg(errp, "DSA accelerator is not enabled.");
>               ^
> qemu/include/qapi/error.h:318:26: note: in definition of macro ‘error_setg’
>    error_setg_internal((errp), __FILE__, __LINE__, __func__, \
>                        ^~~~

Sorry for that. I will make sure I test both configurations for my next
version before running git send-email...
On Tue, Oct 15, 2024 at 03:02:37PM -0700, Yichen Wang wrote:
> On Fri, Oct 11, 2024 at 9:32 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > The doc update is still missing under docs/; we may need that for a
> > final merge.
>
> I will work with Intel to prepare a doc in my next patch.
>
> > Are you using this in production? How does it perform in real life?
>
> Yes, we do use it in production. Our codebase is based on an old QEMU
> release (5.X), so we backported the series there. The major use case is
> just to accelerate live migration, and it is currently under QA scale
> testing. The main motivation is that we reserve 4 cores for all control
> plane services, including QEMU. While doing 2nd-scheduling (i.e. live
> migration to reduce fragmentation, very commonly seen at cloud
> providers), we found that QEMU eats a lot of CPU, which causes jitter
> and slowness on the control planes. Even though this does not happen
> too frequently, we still want it to be stable. With the help of DSA, it
> saves CPU while accelerating the process, so we want to use it in
> production.

Thanks. Please consider adding something like this (the issues, and why
and how DSA helps, etc.) to the doc file.

> > There's a lot of new code added for DSA just for this optimization on
> > zero page detection. We'd better understand the major benefits, and
> > also whether it's applicable to other parts of QEMU or migration
> > only. I actually wonder, if we're going to support enqcmd, whether
> > migration is the best starting point (rather than other places where
> > we emulate tons of devices, and maybe some backends could speed up
> > I/O with enqcmd in some form?)..
but it's more of a pure question.
> >
> I tried to put most of the code in dsa.c and make minimal changes to
> all other files. Even in dsa.c, there is an abstraction for "submit
> task", and an implementation of "submit a buffer_zero task". I think
> this is the best I can think of. I am open to suggestions on how we
> can move this forward. :)

That's OK. Though I think you missed one of my questions in the email,
about a parameter that was mentioned in the cover letter but that I
never found in this series.

If you plan to repost soon, please make sure the patchset is properly
tested (including builds), and that the results reflect what is posted.

Thanks,