| Field | Value |
|---|---|
| Message ID | 20170828112952.22965-1-kchamart@redhat.com (mailing list archive) |
| State | New, archived |
On 08/28/2017 06:29 AM, Kashyap Chamarthy wrote:
> This is the follow-up patch that was discussed[*] as part of feedback to
> qemu-iotest 194.
>
> Changes in this patch:
>
> - Supply 'job-id' parameter to `drive-mirror` invocation.
>
> - Issue `block-job-cancel` command on the source QEMU to gracefully
>   complete the mirroring operation.
>
> - Stop the NBD server on the destination QEMU.
>
> - Finally, exit once the event BLOCK_JOB_COMPLETED is emitted.
>
> With the above, the test will also be (almost) in sync with the
> procedure outlined in the document live-block-operations.rst[+]
> (section: "QMP invocation for live storage migration with
> ``drive-mirror`` + NBD").
>
> [*] https://lists.nongnu.org/archive/html/qemu-devel/2017-08/msg04820.html
>     -- qemu-iotests: add 194 non-shared storage migration test
> [+] https://git.qemu.org/gitweb.cgi?p=qemu.git;a=blob;f=docs/interop/live-block-operations.rst
>
> Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
> ---
> I wonder:
>
> - Is it worth printing the MIGRATION event state change?

I think waiting for both the BLOCK_JOB_COMPLETED and MIGRATION events
makes sense (in other words, let's check both events in the expected
order, rather than just one or the other).

> - Since we're not checking on the MIGRATION event anymore, can
>   the migration state change events related code (that is triggered
>   by setting 'migrate-set-capabilities') be simply removed?

If we're going to mirror libvirt's non-shared storage migration
sequence, I think we want to keep everything, rather than drop the
migration half.
> ---
>  tests/qemu-iotests/194     | 17 ++++++++++++-----
>  tests/qemu-iotests/194.out | 14 ++++++++------
>  2 files changed, 20 insertions(+), 11 deletions(-)
>
> diff --git a/tests/qemu-iotests/194 b/tests/qemu-iotests/194
> index 8028111e21bed5cf4a2e8e32dc04aa5a9ea9caca..8d746be9d0033f478f11886ee93f95b0fa55bab0 100755
> --- a/tests/qemu-iotests/194
> +++ b/tests/qemu-iotests/194
> @@ -46,16 +46,17 @@ iotests.log('Launching NBD server on destination...')
>  iotests.log(dest_vm.qmp('nbd-server-start', addr={'type': 'unix', 'data': {'path': nbd_sock_path}}))
>  iotests.log(dest_vm.qmp('nbd-server-add', device='drive0', writable=True))
>
> -iotests.log('Starting drive-mirror on source...')
> +iotests.log('Starting `drive-mirror` on source...')
>  iotests.log(source_vm.qmp(
>      'drive-mirror',
>      device='drive0',
>      target='nbd+unix:///drive0?socket={0}'.format(nbd_sock_path),
>      sync='full',
>      format='raw', # always raw, the server handles the format
> -    mode='existing'))
> +    mode='existing',
> +    job_id='mirror-job0'))
>
> -iotests.log('Waiting for drive-mirror to complete...')
> +iotests.log('Waiting for `drive-mirror` to complete...')

So, up to here is okay,

>  iotests.log(source_vm.event_wait('BLOCK_JOB_READY'),
>              filters=[iotests.filter_qmp_event])
>
> @@ -66,8 +67,14 @@ dest_vm.qmp('migrate-set-capabilities',
>              capabilities=[{'capability': 'events', 'state': True}])
>  iotests.log(source_vm.qmp('migrate', uri='unix:{0}'.format(migration_sock_path)))
>
> +iotests.log('Gracefully ending the `drive-mirror` job on source...')
> +iotests.log(source_vm.qmp('block-job-cancel', device='mirror-job0'))
> +
> +iotests.log('Stopping the NBD server on destination...')
> +iotests.log(dest_vm.qmp('nbd-server-stop'))
> +
>  while True:
> -    event = source_vm.event_wait('MIGRATION')
> +    event = source_vm.event_wait('BLOCK_JOB_COMPLETED')

And this event makes sense for catching the block-job-cancel, but I
think you STILL want to keep a while loop for catching migration as well.
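[Editor's note: the review above leans on a documented QEMU behaviour: issuing `block-job-cancel` on a mirror job that has already emitted BLOCK_JOB_READY concludes it gracefully (the destination holds a point-in-time copy) and QEMU emits BLOCK_JOB_COMPLETED, not BLOCK_JOB_CANCELLED. A toy model of that lifecycle — plain Python, not QEMU code, with made-up names — may help readers unfamiliar with it:]

```python
# Simplified model of a mirror block job's lifecycle as this test drives
# it. This is an illustration only, not QEMU's implementation.
class MirrorJob:
    def __init__(self, job_id):
        self.job_id = job_id
        self.state = 'running'
        self.events = []

    def sync_done(self):
        # Source and destination contents have converged; QEMU signals
        # this with BLOCK_JOB_READY and keeps mirroring new writes.
        self.state = 'ready'
        self.events.append('BLOCK_JOB_READY')

    def cancel(self):
        # Cancelling a READY mirror job is the graceful way to finish it:
        # it yields BLOCK_JOB_COMPLETED. Cancelling earlier aborts the
        # copy and yields BLOCK_JOB_CANCELLED instead.
        if self.state == 'ready':
            self.events.append('BLOCK_JOB_COMPLETED')
        else:
            self.events.append('BLOCK_JOB_CANCELLED')
        self.state = 'concluded'

job = MirrorJob('mirror-job0')
job.sync_done()
job.cancel()
print(job.events)  # ['BLOCK_JOB_READY', 'BLOCK_JOB_COMPLETED']
```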
On Mon, Aug 28, 2017 at 09:51:43AM -0500, Eric Blake wrote:
> On 08/28/2017 06:29 AM, Kashyap Chamarthy wrote:
> > This is the follow-up patch that was discussed[*] as part of feedback to
> > qemu-iotest 194.
> >
> > Changes in this patch:
> >
> > - Supply 'job-id' parameter to `drive-mirror` invocation.
> >
> > - Issue `block-job-cancel` command on the source QEMU to gracefully
> >   complete the mirroring operation.
> >
> > - Stop the NBD server on the destination QEMU.
> >
> > - Finally, exit once the event BLOCK_JOB_COMPLETED is emitted.
> >
> > With the above, the test will also be (almost) in sync with the
> > procedure outlined in the document live-block-operations.rst[+]
> > (section: "QMP invocation for live storage migration with
> > ``drive-mirror`` + NBD").
> >
> > [*] https://lists.nongnu.org/archive/html/qemu-devel/2017-08/msg04820.html
> >     -- qemu-iotests: add 194 non-shared storage migration test
> > [+] https://git.qemu.org/gitweb.cgi?p=qemu.git;a=blob;f=docs/interop/live-block-operations.rst
> >
> > Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
> > ---
> > I wonder:
> >
> > - Is it worth printing the MIGRATION event state change?
>
> I think waiting for both the BLOCK_JOB_COMPLETED and MIGRATION events
> makes sense (in other words, let's check both events in the expected
> order, rather than just one or the other).

That sounds more robust, will do in the next iteration.

> > - Since we're not checking on the MIGRATION event anymore, can
> >   the migration state change events related code (that is triggered
> >   by setting 'migrate-set-capabilities') be simply removed?
>
> If we're going to mirror libvirt's non-shared storage migration
> sequence, I think we want to keep everything, rather than drop the
> migration half.

Yes, noted.

[...]
> > -iotests.log('Starting drive-mirror on source...')
> > +iotests.log('Starting `drive-mirror` on source...')
> >  iotests.log(source_vm.qmp(
> >      'drive-mirror',
> >      device='drive0',
> >      target='nbd+unix:///drive0?socket={0}'.format(nbd_sock_path),
> >      sync='full',
> >      format='raw', # always raw, the server handles the format
> > -    mode='existing'))
> > +    mode='existing',
> > +    job_id='mirror-job0'))
> >
> > -iotests.log('Waiting for drive-mirror to complete...')
> > +iotests.log('Waiting for `drive-mirror` to complete...')
>
> So, up to here is okay,
>
> >  iotests.log(source_vm.event_wait('BLOCK_JOB_READY'),
> >              filters=[iotests.filter_qmp_event])
> >
> > @@ -66,8 +67,14 @@ dest_vm.qmp('migrate-set-capabilities',
> >              capabilities=[{'capability': 'events', 'state': True}])
> >  iotests.log(source_vm.qmp('migrate', uri='unix:{0}'.format(migration_sock_path)))
> >
> > +iotests.log('Gracefully ending the `drive-mirror` job on source...')
> > +iotests.log(source_vm.qmp('block-job-cancel', device='mirror-job0'))
> > +
> > +iotests.log('Stopping the NBD server on destination...')
> > +iotests.log(dest_vm.qmp('nbd-server-stop'))
> > +
> >  while True:
> > -    event = source_vm.event_wait('MIGRATION')
> > +    event = source_vm.event_wait('BLOCK_JOB_COMPLETED')
>
> And this event makes sense for catching the block-job-cancel, but I
> think you STILL want to keep a while loop for catching migration as well.

Yes, will do. Thanks for the quick feedback.

[...]
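[Editor's note: the agreed-upon fix is to wait for BLOCK_JOB_COMPLETED and a terminal MIGRATION event in that order, rather than only one of them. A rough sketch of such a loop — plain Python over a hypothetical list of already-received QMP event dicts, not the actual iotests `event_wait` API:]

```python
# Sketch: consume events until both BLOCK_JOB_COMPLETED and a terminal
# MIGRATION status have been seen; return the names in arrival order.
# 'event_stream' is any iterable of QMP-shaped event dicts.
def wait_for_events(event_stream):
    seen = []
    for event in event_stream:
        name = event['event']
        if name == 'BLOCK_JOB_COMPLETED':
            seen.append(name)
        elif name == 'MIGRATION' and \
                event['data']['status'] in ('completed', 'failed'):
            seen.append(name)
        if len(seen) == 2:
            break
    return seen

# Demo with the event order the test expects after block-job-cancel:
events = [
    {'event': 'MIGRATION', 'data': {'status': 'setup'}},
    {'event': 'MIGRATION', 'data': {'status': 'active'}},
    {'event': 'BLOCK_JOB_COMPLETED', 'data': {'device': 'mirror-job0'}},
    {'event': 'MIGRATION', 'data': {'status': 'completed'}},
]
print(wait_for_events(events))  # ['BLOCK_JOB_COMPLETED', 'MIGRATION']
```

Non-terminal MIGRATION states ('setup', 'active') are deliberately skipped, which is the same filtering the original `while` loop did with its status check.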
diff --git a/tests/qemu-iotests/194 b/tests/qemu-iotests/194
index 8028111e21bed5cf4a2e8e32dc04aa5a9ea9caca..8d746be9d0033f478f11886ee93f95b0fa55bab0 100755
--- a/tests/qemu-iotests/194
+++ b/tests/qemu-iotests/194
@@ -46,16 +46,17 @@ iotests.log('Launching NBD server on destination...')
 iotests.log(dest_vm.qmp('nbd-server-start', addr={'type': 'unix', 'data': {'path': nbd_sock_path}}))
 iotests.log(dest_vm.qmp('nbd-server-add', device='drive0', writable=True))
 
-iotests.log('Starting drive-mirror on source...')
+iotests.log('Starting `drive-mirror` on source...')
 iotests.log(source_vm.qmp(
     'drive-mirror',
     device='drive0',
     target='nbd+unix:///drive0?socket={0}'.format(nbd_sock_path),
     sync='full',
     format='raw', # always raw, the server handles the format
-    mode='existing'))
+    mode='existing',
+    job_id='mirror-job0'))
 
-iotests.log('Waiting for drive-mirror to complete...')
+iotests.log('Waiting for `drive-mirror` to complete...')
 iotests.log(source_vm.event_wait('BLOCK_JOB_READY'),
             filters=[iotests.filter_qmp_event])
 
@@ -66,8 +67,14 @@ dest_vm.qmp('migrate-set-capabilities',
             capabilities=[{'capability': 'events', 'state': True}])
 iotests.log(source_vm.qmp('migrate', uri='unix:{0}'.format(migration_sock_path)))
 
+iotests.log('Gracefully ending the `drive-mirror` job on source...')
+iotests.log(source_vm.qmp('block-job-cancel', device='mirror-job0'))
+
+iotests.log('Stopping the NBD server on destination...')
+iotests.log(dest_vm.qmp('nbd-server-stop'))
+
 while True:
-    event = source_vm.event_wait('MIGRATION')
+    event = source_vm.event_wait('BLOCK_JOB_COMPLETED')
     iotests.log(event, filters=[iotests.filter_qmp_event])
-    if event['data']['status'] in ('completed', 'failed'):
+    if event['event'] == 'BLOCK_JOB_COMPLETED':
         break
diff --git a/tests/qemu-iotests/194.out b/tests/qemu-iotests/194.out
index ae501fecacb706b1851cb9063ce9c9d5a28bb7ea..3a0e3a26e5342b0e3f0373623efd9d2b3ee8d2be 100644
--- a/tests/qemu-iotests/194.out
+++ b/tests/qemu-iotests/194.out
@@ -2,12 +2,14 @@ Launching VMs...
 Launching NBD server on destination...
 {u'return': {}}
 {u'return': {}}
-Starting drive-mirror on source...
+Starting `drive-mirror` on source...
 {u'return': {}}
-Waiting for drive-mirror to complete...
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'drive0', u'type': u'mirror', u'speed': 0, u'len': 1073741824, u'offset': 1073741824}, u'event': u'BLOCK_JOB_READY'}
+Waiting for `drive-mirror` to complete...
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror-job0', u'type': u'mirror', u'speed': 0, u'len': 1073741824, u'offset': 1073741824}, u'event': u'BLOCK_JOB_READY'}
 Starting migration...
 {u'return': {}}
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'setup'}, u'event': u'MIGRATION'}
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'active'}, u'event': u'MIGRATION'}
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'completed'}, u'event': u'MIGRATION'}
+Gracefully ending the `drive-mirror` job on source...
+{u'return': {}}
+Stopping the NBD server on destination...
+{u'return': {}}
+{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror-job0', u'type': u'mirror', u'speed': 0, u'len': 1073741824, u'offset': 1073741824}, u'event': u'BLOCK_JOB_COMPLETED'}
This is the follow-up patch that was discussed[*] as part of feedback to
qemu-iotest 194.

Changes in this patch:

- Supply 'job-id' parameter to `drive-mirror` invocation.

- Issue `block-job-cancel` command on the source QEMU to gracefully
  complete the mirroring operation.

- Stop the NBD server on the destination QEMU.

- Finally, exit once the event BLOCK_JOB_COMPLETED is emitted.

With the above, the test will also be (almost) in sync with the
procedure outlined in the document live-block-operations.rst[+]
(section: "QMP invocation for live storage migration with
``drive-mirror`` + NBD").

[*] https://lists.nongnu.org/archive/html/qemu-devel/2017-08/msg04820.html
    -- qemu-iotests: add 194 non-shared storage migration test
[+] https://git.qemu.org/gitweb.cgi?p=qemu.git;a=blob;f=docs/interop/live-block-operations.rst

Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
---
I wonder:

- Is it worth printing the MIGRATION event state change?

- Since we're not checking on the MIGRATION event anymore, can
  the migration state change events related code (that is triggered
  by setting 'migrate-set-capabilities') be simply removed?
---
 tests/qemu-iotests/194     | 17 ++++++++++++-----
 tests/qemu-iotests/194.out | 14 ++++++++------
 2 files changed, 20 insertions(+), 11 deletions(-)
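[Editor's note: the change list above amounts to a fixed ordering of QMP commands across the two QEMU instances. The sketch below encodes that ordering as plain data; the socket paths are hypothetical placeholders and the tuples merely illustrate (command, arguments) pairs, not the test's actual helper calls:]

```python
# Ordered QMP command sequence for drive-mirror + NBD non-shared storage
# migration, as exercised by this patch. Paths are illustrative only.
nbd_sock_path = '/tmp/nbd.sock'            # placeholder
migration_sock_path = '/tmp/migrate.sock'  # placeholder

# Destination side: start an NBD server and export the target drive.
dest_setup = [
    ('nbd-server-start', {'addr': {'type': 'unix',
                                   'data': {'path': nbd_sock_path}}}),
    ('nbd-server-add', {'device': 'drive0', 'writable': True}),
]

# Source side: mirror to the NBD export, then migrate; after the
# migration converges, end the ready mirror job gracefully.
source_cmds = [
    ('drive-mirror', {'device': 'drive0',
                      'target': 'nbd+unix:///drive0?socket=' + nbd_sock_path,
                      'sync': 'full', 'format': 'raw',
                      'mode': 'existing', 'job-id': 'mirror-job0'}),
    # ... wait for BLOCK_JOB_READY before migrating ...
    ('migrate', {'uri': 'unix:' + migration_sock_path}),
    ('block-job-cancel', {'device': 'mirror-job0'}),
]

# Destination side: tear down the NBD server once mirroring is done.
dest_teardown = [('nbd-server-stop', {})]

sequence = [name for name, _ in dest_setup + source_cmds + dest_teardown]
print(sequence)
```

Keeping the commands as data makes the required ordering explicit: the NBD export must exist before `drive-mirror` targets it, and `block-job-cancel` must precede `nbd-server-stop` so the job concludes before its target disappears.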