Message ID | 20191125120548.13589-1-yury-kotov@yandex-team.ru (mailing list archive) |
---|---|
State | New, archived |
Series | migration/ram: Yield periodically to the main loop |
Yury Kotov <yury-kotov@yandex-team.ru> wrote:
> Usually, the incoming migration coroutine yields to the main loop
> while its IO-channel is waiting for data to receive. But there is a case
> when RAM migration and data receiving have the same speed: a VM with huge
> zeroed RAM. In this case, the IO-channel never has to wait, the coroutine
> never yields, and so the main loop is stuck and, for example, does not
> respond to QMP commands.
>
> For this case, yield periodically, but not too often, so as not to
> affect the speed of migration.

Ouchhhhh

As a workaround, I agree. As a final solution, I think that it is best to
just move the incoming migration to its own thread; we get trouble like
this from time to time :-(

> Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
> ---
>  migration/ram.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 5078f94490..fed6ef4b22 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -4227,7 +4227,7 @@ static void colo_flush_ram_cache(void)
>   */
>  static int ram_load_precopy(QEMUFile *f)
>  {
> -    int flags = 0, ret = 0, invalid_flags = 0, len = 0;
> +    int flags = 0, ret = 0, invalid_flags = 0, len = 0, i = 0;
>      /* ADVISE is earlier, it shows the source has the postcopy capability on */
>      bool postcopy_advised = postcopy_is_advised();
>      if (!migrate_use_compression()) {
> @@ -4239,6 +4239,17 @@ static int ram_load_precopy(QEMUFile *f)
>          void *host = NULL;
>          uint8_t ch;
>
> +        /*
> +         * Yield periodically to let the main loop run, but an iteration of
> +         * the main loop is expensive, so do it only every so many iterations.
> +         */
> +        if ((i & 32767) == 0) {
> +            aio_co_schedule(qemu_get_current_aio_context(),
> +                            qemu_coroutine_self());
> +            qemu_coroutine_yield();
> +        }
> +        i++;
> +
>          addr = qemu_get_be64(f);
>          flags = addr & ~TARGET_PAGE_MASK;
>          addr &= TARGET_PAGE_MASK;
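For illustration, a rough standalone analogue of that "own thread" direction, written against pthreads and a plain GLib main loop rather than QEMU's migration code; receive_all_ram() and heartbeat() are hypothetical stand-ins for the incoming-migration loop and for main-loop work such as QMP handling, and this is only a sketch, not how QEMU itself would be restructured:

```c
/*
 * Sketch only: the receive loop runs in its own thread, so the GLib main
 * loop (standing in for QEMU's main loop) stays responsive no matter how
 * busy the receive loop is. Stop with Ctrl-C.
 *
 * Build: gcc thread-demo.c $(pkg-config --cflags --libs glib-2.0) -lpthread
 */
#include <glib.h>
#include <pthread.h>

static void *receive_all_ram(void *opaque)
{
    /* Hypothetical stand-in for the incoming-migration receive loop; it
     * may block or spin freely, since it no longer shares a thread with
     * the main loop. */
    for (long i = 0; i < 100 * 1000 * 1000; i++) {
        /* ... receive and apply one page ... */
    }
    g_print("incoming migration finished\n");
    return NULL;
}

static gboolean heartbeat(gpointer data)
{
    /* Stand-in for QMP handling and other main-loop work. */
    g_print("main loop is alive\n");
    return G_SOURCE_CONTINUE;
}

int main(void)
{
    pthread_t tid;
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    pthread_create(&tid, NULL, receive_all_ram, NULL);
    g_timeout_add(100, heartbeat, NULL);
    g_main_loop_run(loop);    /* never starved by the receive thread */
    return 0;
}
```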
Patchew URL: https://patchew.org/QEMU/20191125120548.13589-1-yury-kotov@yandex-team.ru/

Hi,

This series failed the docker-quick@centos7 build test. Please find the
testing commands and their output below. If you have Docker installed, you
can probably reproduce it locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

  TEST    iotest-qcow2: 268
Failures: 267
Failed 1 of 108 iotests
make: *** [check-tests/check-block.sh] Error 1
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 662, in <module>
    sys.exit(main())
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=4d6b22e52a25464792ace98b6aaf3bb1', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-dp6w28k7/src/docker-src.2019-11-25-07.31.09.7614:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=4d6b22e52a25464792ace98b6aaf3bb1
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-dp6w28k7/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    13m8.852s
user    0m8.536s

The full log is available at
http://patchew.org/logs/20191125120548.13589-1-yury-kotov@yandex-team.ru/testing.docker-quick@centos7/?type=message.

---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
```diff
diff --git a/migration/ram.c b/migration/ram.c
index 5078f94490..fed6ef4b22 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -4227,7 +4227,7 @@ static void colo_flush_ram_cache(void)
  */
 static int ram_load_precopy(QEMUFile *f)
 {
-    int flags = 0, ret = 0, invalid_flags = 0, len = 0;
+    int flags = 0, ret = 0, invalid_flags = 0, len = 0, i = 0;
     /* ADVISE is earlier, it shows the source has the postcopy capability on */
     bool postcopy_advised = postcopy_is_advised();
     if (!migrate_use_compression()) {
@@ -4239,6 +4239,17 @@ static int ram_load_precopy(QEMUFile *f)
         void *host = NULL;
         uint8_t ch;
 
+        /*
+         * Yield periodically to let the main loop run, but an iteration of
+         * the main loop is expensive, so do it only every so many iterations.
+         */
+        if ((i & 32767) == 0) {
+            aio_co_schedule(qemu_get_current_aio_context(),
+                            qemu_coroutine_self());
+            qemu_coroutine_yield();
+        }
+        i++;
+
         addr = qemu_get_be64(f);
         flags = addr & ~TARGET_PAGE_MASK;
         addr &= TARGET_PAGE_MASK;
```
Usually, the incoming migration coroutine yields to the main loop
while its IO-channel is waiting for data to receive. But there is a case
when RAM migration and data receiving have the same speed: a VM with huge
zeroed RAM. In this case, the IO-channel never has to wait, the coroutine
never yields, and so the main loop is stuck and, for example, does not
respond to QMP commands.

For this case, yield periodically, but not too often, so as not to
affect the speed of migration.

Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
---
 migration/ram.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)
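For illustration, a minimal standalone analogue of the starvation problem and of the periodic-yield mitigation, written against plain GLib rather than QEMU's coroutine/AioContext API; process_one_page() and heartbeat() are hypothetical stand-ins for receiving one RAM page and for main-loop work such as QMP handling:

```c
/*
 * Sketch only: a long busy loop that shares its thread with the main loop
 * starves it; draining ready main-loop sources every 32768 iterations
 * (mirroring the (i & 32767) check in the patch) keeps it responsive.
 *
 * Build: gcc yield-demo.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>

static gboolean heartbeat(gpointer data)
{
    /* Stand-in for QMP handling and other main-loop work. */
    g_print("main loop is alive\n");
    return G_SOURCE_CONTINUE;
}

static void process_one_page(void)
{
    /* Hypothetical stand-in for receiving and applying one RAM page. */
}

int main(void)
{
    g_timeout_add(100, heartbeat, NULL);

    for (long i = 0; i < 100 * 1000 * 1000; i++) {
        process_one_page();
        /*
         * Without this block, heartbeat() never fires because nothing
         * ever iterates the main context. With it, pending main-loop
         * work runs at most every 32768 iterations.
         */
        if ((i & 32767) == 0) {
            while (g_main_context_iteration(NULL, FALSE)) {
                /* dispatch all currently ready sources, non-blocking */
            }
        }
    }
    return 0;
}
```

In QEMU itself the patch gets the same effect with aio_co_schedule() plus qemu_coroutine_yield(), which reschedules the coroutine onto the current AioContext so it resumes only after the main loop has had a chance to run.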