Message ID | 20200224065414.36524-8-zhang.zhanghailiang@huawei.com (mailing list archive)
---|---
State | New, archived |
Series | Optimize VM's downtime while do checkpoint in COLO
On 2/24/20 12:54 AM, zhanghailiang wrote:
> We can migrate some dirty pages during the gaps between checkpoints;
> this way, we can reduce the amount of RAM migrated during checkpointing.
>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> ---

> +++ b/qapi/migration.json
> @@ -977,12 +977,14 @@
>  #
>  # @vmstate-loaded: VM's state has been loaded by SVM.
>  #
> +# @migrate-ram-background: Send some dirty pages during the gap of COLO checkpoint

Missing a '(since 5.0)' tag.

> +#
>  # Since: 2.8
>  ##
>  { 'enum': 'COLOMessage',
>    'data': [ 'checkpoint-ready', 'checkpoint-request', 'checkpoint-reply',
>              'vmstate-send', 'vmstate-size', 'vmstate-received',
> -            'vmstate-loaded' ] }
> +            'vmstate-loaded', 'migrate-ram-background' ] }
>
>  ##
>  # @COLOMode:
> -----Original Message-----
> From: Eric Blake [mailto:eblake@redhat.com]
> Sent: Monday, February 24, 2020 11:19 PM
> To: Zhanghailiang <zhang.zhanghailiang@huawei.com>; qemu-devel@nongnu.org
> Cc: danielcho@qnap.com; dgilbert@redhat.com; quintela@redhat.com
> Subject: Re: [PATCH V2 7/8] COLO: Migrate dirty pages during the gap of checkpointing
>
> On 2/24/20 12:54 AM, zhanghailiang wrote:
> > We can migrate some dirty pages during the gaps between checkpoints;
> > this way, we can reduce the amount of RAM migrated during checkpointing.
> >
> > Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> > ---
>
> > +++ b/qapi/migration.json
> > @@ -977,12 +977,14 @@
> >  #
> >  # @vmstate-loaded: VM's state has been loaded by SVM.
> >  #
> > +# @migrate-ram-background: Send some dirty pages during the gap of COLO checkpoint
>
> Missing a '(since 5.0)' tag.

OK, I will add it in the next version. I forgot to modify it in this version, even though you reminded me about it in the previous one. :(

> > +#
> >  # Since: 2.8
> >  ##
> >  { 'enum': 'COLOMessage',
> >    'data': [ 'checkpoint-ready', 'checkpoint-request', 'checkpoint-reply',
> >              'vmstate-send', 'vmstate-size', 'vmstate-received',
> > -            'vmstate-loaded' ] }
> > +            'vmstate-loaded', 'migrate-ram-background' ] }
> >
> >  ##
> >  # @COLOMode:
>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc. +1-919-301-3226
> Virtualization: qemu.org | libvirt.org
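For reference, the fix Eric is asking for would presumably just append the release tag to the new doc comment, along these lines (a sketch of the expected v3 wording based only on his comment; the actual v3 text is not part of this thread):

```
# @migrate-ram-background: Send some dirty pages during the gap of COLO
#     checkpoint (since 5.0)
```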
* zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> We can migrate some dirty pages during the gaps between checkpoints;
> this way, we can reduce the amount of RAM migrated during checkpointing.
>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> ---
>  migration/colo.c       | 73 ++++++++++++++++++++++++++++++++++++++++--
>  migration/migration.h  |  1 +
>  migration/trace-events |  1 +
>  qapi/migration.json    |  4 ++-
>  4 files changed, 75 insertions(+), 4 deletions(-)
>
> diff --git a/migration/colo.c b/migration/colo.c
> index 44942c4e23..c36d94072f 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -47,6 +47,13 @@ static COLOMode last_colo_mode;
>
>  #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
>
> +#define DEFAULT_RAM_PENDING_CHECK 1000
> +
> +/* should be calculated by bandwidth and max downtime ? */
> +#define THRESHOLD_PENDING_SIZE (100 * 1024 * 1024UL)

In the last version I asked to change these two values to parameters.

Dave

> +static int checkpoint_request;
> +
>  bool migration_in_colo_state(void)
>  {
>      MigrationState *s = migrate_get_current();

[...]

> --
> 2.21.0
>

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
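Dave's request refers to replacing `DEFAULT_RAM_PENDING_CHECK` and `THRESHOLD_PENDING_SIZE` with runtime-tunable migration parameters. A minimal standalone sketch of the idea, with invented names (`ram_pending_check_ms`, `ram_pending_threshold`) rather than anything from this series:

```c
/* Standalone sketch (not QEMU code): the check interval and pending-RAM
 * threshold become runtime-tunable fields instead of #defines.  All
 * names here are hypothetical, invented for illustration only. */
#include <stdint.h>
#include <stdio.h>

typedef struct MigrationParams {
    int64_t  ram_pending_check_ms;    /* was DEFAULT_RAM_PENDING_CHECK */
    uint64_t ram_pending_threshold;   /* was THRESHOLD_PENDING_SIZE */
} MigrationParams;

static MigrationParams params = {
    .ram_pending_check_ms  = 1000,           /* default: 1 second */
    .ram_pending_threshold = 100ULL << 20,   /* default: 100 MiB */
};

/* Same decision as colo_need_migrate_ram_background(), but against the
 * tunable threshold rather than a compile-time constant. */
static int need_migrate_ram_background(uint64_t pending_size)
{
    return pending_size >= params.ram_pending_threshold;
}

int main(void)
{
    /* An admin lowers the threshold at runtime. */
    params.ram_pending_threshold = 64ULL << 20;
    printf("80 MiB pending -> background round: %d\n",
           need_migrate_ram_background(80ULL << 20));
    return 0;
}
```

QEMU's real tunables go through the QAPI schema and the migrate-set-parameters machinery, so an actual v3 would presumably touch qapi/migration.json as well; the struct above only models the effect.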
diff --git a/migration/colo.c b/migration/colo.c
index 44942c4e23..c36d94072f 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -47,6 +47,13 @@ static COLOMode last_colo_mode;
 
 #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
 
+#define DEFAULT_RAM_PENDING_CHECK 1000
+
+/* should be calculated by bandwidth and max downtime ? */
+#define THRESHOLD_PENDING_SIZE (100 * 1024 * 1024UL)
+
+static int checkpoint_request;
+
 bool migration_in_colo_state(void)
 {
     MigrationState *s = migrate_get_current();
@@ -517,6 +524,20 @@ static void colo_compare_notify_checkpoint(Notifier *notifier, void *data)
     colo_checkpoint_notify(data);
 }
 
+static bool colo_need_migrate_ram_background(MigrationState *s)
+{
+    uint64_t pending_size, pend_pre, pend_compat, pend_post;
+    int64_t max_size = THRESHOLD_PENDING_SIZE;
+
+    qemu_savevm_state_pending(s->to_dst_file, max_size, &pend_pre,
+                              &pend_compat, &pend_post);
+    pending_size = pend_pre + pend_compat + pend_post;
+
+    trace_colo_need_migrate_ram_background(pending_size);
+    return (pending_size >= max_size);
+}
+
+
 static void colo_process_checkpoint(MigrationState *s)
 {
     QIOChannelBuffer *bioc;
@@ -572,6 +593,8 @@ static void colo_process_checkpoint(MigrationState *s)
 
     timer_mod(s->colo_delay_timer,
               current_time + s->parameters.x_checkpoint_delay);
+    timer_mod(s->pending_ram_check_timer,
+              current_time + DEFAULT_RAM_PENDING_CHECK);
 
     while (s->state == MIGRATION_STATUS_COLO) {
         if (failover_get_state() != FAILOVER_STATUS_NONE) {
@@ -584,9 +607,30 @@ static void colo_process_checkpoint(MigrationState *s)
         if (s->state != MIGRATION_STATUS_COLO) {
             goto out;
         }
-        ret = colo_do_checkpoint_transaction(s, bioc, fb);
-        if (ret < 0) {
-            goto out;
+        if (atomic_xchg(&checkpoint_request, 0)) {
+            /* start a colo checkpoint */
+            ret = colo_do_checkpoint_transaction(s, bioc, fb);
+            if (ret < 0) {
+                goto out;
+            }
+        } else {
+            if (colo_need_migrate_ram_background(s)) {
+                colo_send_message(s->to_dst_file,
+                                  COLO_MESSAGE_MIGRATE_RAM_BACKGROUND,
+                                  &local_err);
+                if (local_err) {
+                    goto out;
+                }
+
+                qemu_savevm_state_iterate(s->to_dst_file, false);
+                qemu_put_byte(s->to_dst_file, QEMU_VM_EOF);
+                ret = qemu_file_get_error(s->to_dst_file);
+                if (ret < 0) {
+                    error_setg_errno(&local_err, -ret,
+                                     "Failed to send dirty pages background");
+                    goto out;
+                }
+            }
         }
     }
 
@@ -627,6 +671,8 @@ out:
     colo_compare_unregister_notifier(&packets_compare_notifier);
     timer_del(s->colo_delay_timer);
     timer_free(s->colo_delay_timer);
+    timer_del(s->pending_ram_check_timer);
+    timer_free(s->pending_ram_check_timer);
     qemu_sem_destroy(&s->colo_checkpoint_sem);
 
     /*
@@ -644,6 +690,7 @@ void colo_checkpoint_notify(void *opaque)
     MigrationState *s = opaque;
     int64_t next_notify_time;
 
+    atomic_inc(&checkpoint_request);
     qemu_sem_post(&s->colo_checkpoint_sem);
     s->colo_checkpoint_time = qemu_clock_get_ms(QEMU_CLOCK_HOST);
     next_notify_time = s->colo_checkpoint_time +
@@ -651,6 +698,19 @@ void colo_checkpoint_notify(void *opaque)
     timer_mod(s->colo_delay_timer, next_notify_time);
 }
 
+static void colo_pending_ram_check_notify(void *opaque)
+{
+    int64_t next_notify_time;
+    MigrationState *s = opaque;
+
+    if (migration_in_colo_state()) {
+        next_notify_time = DEFAULT_RAM_PENDING_CHECK +
+                           qemu_clock_get_ms(QEMU_CLOCK_HOST);
+        timer_mod(s->pending_ram_check_timer, next_notify_time);
+        qemu_sem_post(&s->colo_checkpoint_sem);
+    }
+}
+
 void migrate_start_colo_process(MigrationState *s)
 {
     qemu_mutex_unlock_iothread();
@@ -658,6 +718,8 @@ void migrate_start_colo_process(MigrationState *s)
     s->colo_delay_timer = timer_new_ms(QEMU_CLOCK_HOST,
                                        colo_checkpoint_notify, s);
 
+    s->pending_ram_check_timer = timer_new_ms(QEMU_CLOCK_HOST,
+                                              colo_pending_ram_check_notify, s);
     qemu_sem_init(&s->colo_exit_sem, 0);
     migrate_set_state(&s->state, MIGRATION_STATUS_ACTIVE,
                       MIGRATION_STATUS_COLO);
@@ -806,6 +868,11 @@ static void colo_wait_handle_message(MigrationIncomingState *mis,
     case COLO_MESSAGE_CHECKPOINT_REQUEST:
         colo_incoming_process_checkpoint(mis, fb, bioc, errp);
         break;
+    case COLO_MESSAGE_MIGRATE_RAM_BACKGROUND:
+        if (qemu_loadvm_state_main(mis->from_src_file, mis) < 0) {
+            error_setg(errp, "Load ram background failed");
+        }
+        break;
     default:
         error_setg(errp, "Got unknown COLO message: %d", msg);
         break;
diff --git a/migration/migration.h b/migration/migration.h
index 8473ddfc88..5355259789 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -219,6 +219,7 @@ struct MigrationState
     QemuSemaphore colo_checkpoint_sem;
     int64_t colo_checkpoint_time;
     QEMUTimer *colo_delay_timer;
+    QEMUTimer *pending_ram_check_timer;
 
     /* The first error that has occurred.
        We used the mutex to be able to return the 1st error message */
diff --git a/migration/trace-events b/migration/trace-events
index 4ab0a503d2..f2ed0c8645 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -295,6 +295,7 @@ migration_tls_incoming_handshake_complete(void) ""
 colo_vm_state_change(const char *old, const char *new) "Change '%s' => '%s'"
 colo_send_message(const char *msg) "Send '%s' message"
 colo_receive_message(const char *msg) "Receive '%s' message"
+colo_need_migrate_ram_background(uint64_t pending_size) "Pending 0x%" PRIx64 " dirty ram"
 
 # colo-failover.c
 colo_failover_set_state(const char *new_state) "new state %s"
diff --git a/qapi/migration.json b/qapi/migration.json
index 52f3429969..73445f1978 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -977,12 +977,14 @@
 #
 # @vmstate-loaded: VM's state has been loaded by SVM.
 #
+# @migrate-ram-background: Send some dirty pages during the gap of COLO checkpoint
+#
 # Since: 2.8
 ##
 { 'enum': 'COLOMessage',
   'data': [ 'checkpoint-ready', 'checkpoint-request', 'checkpoint-reply',
             'vmstate-send', 'vmstate-size', 'vmstate-received',
-            'vmstate-loaded' ] }
+            'vmstate-loaded', 'migrate-ram-background' ] }
 
 ##
 # @COLOMode:
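A detail of the diff above that is easy to miss: both timers post the same `colo_checkpoint_sem`, and the atomic `checkpoint_request` counter is what tells the main loop whether it woke up to take a full checkpoint or merely to re-check pending RAM. Below is a standalone sketch of that wakeup pattern, using plain POSIX primitives rather than QEMU's QemuSemaphore/timer API; the function names are illustrative, not from the patch.

```c
/* Two event sources post one semaphore; an atomic counter records
 * whether a full checkpoint was actually requested. */
#include <semaphore.h>
#include <stdatomic.h>
#include <stdio.h>

static sem_t wakeup_sem;
static atomic_int checkpoint_request;

/* What colo_checkpoint_notify() does: request a checkpoint, then wake. */
static void notify_checkpoint(void)
{
    atomic_fetch_add(&checkpoint_request, 1);
    sem_post(&wakeup_sem);
}

/* What colo_pending_ram_check_notify() does: wake without requesting. */
static void notify_pending_ram_check(void)
{
    sem_post(&wakeup_sem);
}

/* One iteration of the main loop: consume the request flag atomically so
 * a checkpoint wakeup is never mistaken for a pending-RAM-check wakeup. */
static void loop_iteration(void)
{
    sem_wait(&wakeup_sem);
    if (atomic_exchange(&checkpoint_request, 0)) {
        puts("take a full COLO checkpoint");
    } else {
        puts("no checkpoint requested: consider a background RAM round");
    }
}

int main(void)
{
    sem_init(&wakeup_sem, 0, 0);
    notify_pending_ram_check();
    loop_iteration();          /* background path */
    notify_checkpoint();
    loop_iteration();          /* checkpoint path */
    sem_destroy(&wakeup_sem);
    return 0;
}
```

Note that multiple checkpoint requests arriving between wakeups coalesce into a single checkpoint, since the counter is swapped back to zero in one atomic operation.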
We can migrate some dirty pages during the gaps between checkpoints; this
way, we can reduce the amount of RAM migrated during checkpointing.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 migration/colo.c       | 73 ++++++++++++++++++++++++++++++++++++++++--
 migration/migration.h  |  1 +
 migration/trace-events |  1 +
 qapi/migration.json    |  4 ++-
 4 files changed, 75 insertions(+), 4 deletions(-)