| Message ID | 1501122974-18860-1-git-send-email-jianjay.zhou@huawei.com (mailing list archive) |
|---|---|
| State | New, archived |
On 27/07/2017 04:36, Jay Zhou wrote:
> Qemu_savevm_state_cleanup takes about 300ms in my ram migration tests
> with a 8U24G vm(20G is really occupied), the main cost comes from
> KVM_SET_USER_MEMORY_REGION ioctl when mem.memory_size = 0 in
> kvm_set_user_memory_region. In kmod, the main cost is
> kvm_zap_obsolete_pages, which traverses the active_mmu_pages list to
> zap the unsync sptes.
>
> It can be optimized by delaying memory_global_dirty_log_stop to the next
> vm_start.
>
> Changes v1->v2:
>  - create a VMChangeStateHandler in memory.c to reduce the coupling [Paolo]
>
> Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>

memory_vm_change_state_handler should delete the handler.

Apart from that, because there is no protection against nested
invocations of memory_global_dirty_log_start/stop, a little more work is
needed.

> ---
>  memory.c | 27 ++++++++++++++++++++++++++-
>  1 file changed, 26 insertions(+), 1 deletion(-)
>
> diff --git a/memory.c b/memory.c
> index a7bc70a..4c22b7e 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -2357,8 +2357,14 @@ void memory_global_dirty_log_sync(void)
>      }
>  }
>
> +static VMChangeStateEntry *vmstate_change;
> +
>  void memory_global_dirty_log_start(void)
>  {
> +    if (vmstate_change) {
> +        qemu_del_vm_change_state_handler(vmstate_change);

This should also NULL vmstate_change, so that you can detect the case
where the handler is already installed.

> +    }
> +
>      global_dirty_log = true;
>
>      MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward);
> @@ -2369,7 +2375,7 @@ void memory_global_dirty_log_start(void)
>      memory_region_transaction_commit();
>  }
>
> -void memory_global_dirty_log_stop(void)
> +static void memory_global_dirty_log_do_stop(void)
> {
>      global_dirty_log = false;
>
> @@ -2381,6 +2387,25 @@ void memory_global_dirty_log_stop(void)
>      MEMORY_LISTENER_CALL_GLOBAL(log_global_stop, Reverse);
>  }
>
> +static void memory_vm_change_state_handler(void *opaque, int running,
> +                                           RunState state)
> +{
> +    if (running) {
> +        memory_global_dirty_log_do_stop();

Because this will delete the handler, it should also NULL vmstate_change.

> +    }
> +}
> +
> +void memory_global_dirty_log_stop(void)
> +{
> +    if (!runstate_is_running()) {
> +        vmstate_change = qemu_add_vm_change_state_handler(
> +            memory_vm_change_state_handler, NULL);

And this needs to exit immediately if you have an installed handler.

Thanks,

Paolo

> +        return;
> +    }
> +
> +    memory_global_dirty_log_do_stop();
> +}
> +
>  static void listener_add_address_space(MemoryListener *listener,
>                                         AddressSpace *as)
>  {
Hi Paolo,

On 2017/7/27 22:15, Paolo Bonzini wrote:
> On 27/07/2017 04:36, Jay Zhou wrote:
>> Qemu_savevm_state_cleanup takes about 300ms in my ram migration tests
>> with a 8U24G vm(20G is really occupied), the main cost comes from
>> KVM_SET_USER_MEMORY_REGION ioctl when mem.memory_size = 0 in
>> kvm_set_user_memory_region. In kmod, the main cost is
>> kvm_zap_obsolete_pages, which traverses the active_mmu_pages list to
>> zap the unsync sptes.
>>
>> It can be optimized by delaying memory_global_dirty_log_stop to the next
>> vm_start.
>>
>> Changes v1->v2:
>>  - create a VMChangeStateHandler in memory.c to reduce the coupling [Paolo]
>>
>> Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
>
> memory_vm_change_state_handler should delete the handler.
>
> Apart from that, because there is no protection against nested
> invocations of memory_global_dirty_log_start/stop, a little more work is
> needed.

Thanks for your patience and for pointing everything out, I will fix all
of them.

Jay

>> ---
>>  memory.c | 27 ++++++++++++++++++++++++++-
>>  1 file changed, 26 insertions(+), 1 deletion(-)
>>
>> diff --git a/memory.c b/memory.c
>> index a7bc70a..4c22b7e 100644
>> --- a/memory.c
>> +++ b/memory.c
>> @@ -2357,8 +2357,14 @@ void memory_global_dirty_log_sync(void)
>>      }
>>  }
>>
>> +static VMChangeStateEntry *vmstate_change;
>> +
>>  void memory_global_dirty_log_start(void)
>>  {
>> +    if (vmstate_change) {
>> +        qemu_del_vm_change_state_handler(vmstate_change);
>
> This should also NULL vmstate_change, so that you can detect the case
> where the handler is already installed.
>
>> +    }
>> +
>>      global_dirty_log = true;
>>
>>      MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward);
>> @@ -2369,7 +2375,7 @@ void memory_global_dirty_log_start(void)
>>      memory_region_transaction_commit();
>>  }
>>
>> -void memory_global_dirty_log_stop(void)
>> +static void memory_global_dirty_log_do_stop(void)
>>  {
>>      global_dirty_log = false;
>>
>> @@ -2381,6 +2387,25 @@ void memory_global_dirty_log_stop(void)
>>      MEMORY_LISTENER_CALL_GLOBAL(log_global_stop, Reverse);
>>  }
>>
>> +static void memory_vm_change_state_handler(void *opaque, int running,
>> +                                           RunState state)
>> +{
>> +    if (running) {
>> +        memory_global_dirty_log_do_stop();
>
> Because this will delete the handler, it should also NULL vmstate_change.
>
>> +    }
>> +}
>> +
>> +void memory_global_dirty_log_stop(void)
>> +{
>> +    if (!runstate_is_running()) {
>> +        vmstate_change = qemu_add_vm_change_state_handler(
>> +            memory_vm_change_state_handler, NULL);
>
> And this needs to exit immediately if you have an installed handler.
>
> Thanks,
>
> Paolo
>
>> +        return;
>> +    }
>> +
>> +    memory_global_dirty_log_do_stop();
>> +}
>> +
>>  static void listener_add_address_space(MemoryListener *listener,
>>                                         AddressSpace *as)
>>  {
diff --git a/memory.c b/memory.c
index a7bc70a..4c22b7e 100644
--- a/memory.c
+++ b/memory.c
@@ -2357,8 +2357,14 @@ void memory_global_dirty_log_sync(void)
     }
 }
 
+static VMChangeStateEntry *vmstate_change;
+
 void memory_global_dirty_log_start(void)
 {
+    if (vmstate_change) {
+        qemu_del_vm_change_state_handler(vmstate_change);
+    }
+
     global_dirty_log = true;
 
     MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward);
@@ -2369,7 +2375,7 @@ void memory_global_dirty_log_start(void)
     memory_region_transaction_commit();
 }
 
-void memory_global_dirty_log_stop(void)
+static void memory_global_dirty_log_do_stop(void)
 {
     global_dirty_log = false;
 
@@ -2381,6 +2387,25 @@ void memory_global_dirty_log_stop(void)
     MEMORY_LISTENER_CALL_GLOBAL(log_global_stop, Reverse);
 }
 
+static void memory_vm_change_state_handler(void *opaque, int running,
+                                           RunState state)
+{
+    if (running) {
+        memory_global_dirty_log_do_stop();
+    }
+}
+
+void memory_global_dirty_log_stop(void)
+{
+    if (!runstate_is_running()) {
+        vmstate_change = qemu_add_vm_change_state_handler(
+            memory_vm_change_state_handler, NULL);
+        return;
+    }
+
+    memory_global_dirty_log_do_stop();
+}
+
 static void listener_add_address_space(MemoryListener *listener,
                                        AddressSpace *as)
 {
qemu_savevm_state_cleanup takes about 300ms in my ram migration tests
with an 8U24G VM (20G actually occupied); the main cost comes from the
KVM_SET_USER_MEMORY_REGION ioctl when mem.memory_size = 0 in
kvm_set_user_memory_region. In kmod, the main cost is
kvm_zap_obsolete_pages, which traverses the active_mmu_pages list to
zap the unsync sptes.

It can be optimized by delaying memory_global_dirty_log_stop to the next
vm_start.

Changes v1->v2:
 - create a VMChangeStateHandler in memory.c to reduce the coupling [Paolo]

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
---
 memory.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)