Message ID | 20180807091209.13531-8-xiaoguangrong@tencent.com (mailing list archive) |
---|---|
State | New, archived |
Series | migration: compression optimization |
On Tue, Aug 07, 2018 at 05:12:06PM +0800, guangrong.xiao@gmail.com wrote:
> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>
> flush_compressed_data() needs to wait all compression threads to
> finish their work, after that all threads are free until the
> migration feeds new request to them, reducing its call can improve
> the throughput and use CPU resource more effectively
>
> We do not need to flush all threads at the end of iteration, the
> data can be kept locally until the memory block is changed or
> memory migration starts over in that case we will meet a dirtied
> page which may still exists in compression threads's ring
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
> ---
>  migration/ram.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 99ecf9b315..55966bc2c1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -306,6 +306,8 @@ struct RAMState {
>      uint64_t iterations;
>      /* number of dirty bits in the bitmap */
>      uint64_t migration_dirty_pages;
> +    /* last dirty_sync_count we have seen */
> +    uint64_t dirty_sync_count_prev;
>      /* protects modification of the bitmap */
>      QemuMutex bitmap_mutex;
>      /* The RAMBlock used in the last src_page_requests */
> @@ -3173,6 +3175,17 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>
>      ram_control_before_iterate(f, RAM_CONTROL_ROUND);
>
> +    /*
> +     * if memory migration starts over, we will meet a dirtied page which
> +     * may still exists in compression threads's ring, so we should flush
> +     * the compressed data to make sure the new page is not overwritten by
> +     * the old one in the destination.
> +     */
> +    if (ram_counters.dirty_sync_count != rs->dirty_sync_count_prev) {
> +        rs->dirty_sync_count_prev = ram_counters.dirty_sync_count;
> +        flush_compressed_data(rs);

AFAIU this only happens when ram_save_pending() calls
migration_bitmap_sync(). Could we just simply flush there? Then we
can avoid that new variable.

> +    }
> +
>      t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
>      i = 0;
>      while ((ret = qemu_file_rate_limit(f)) == 0 ||
> @@ -3205,7 +3218,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>          }
>          i++;
>      }
> -    flush_compressed_data(rs);
>      rcu_read_unlock();
>
>      /*
> --
> 2.14.4
>

Regards,
On 08/08/2018 12:52 PM, Peter Xu wrote:
> On Tue, Aug 07, 2018 at 05:12:06PM +0800, guangrong.xiao@gmail.com wrote:
>> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>>
>> flush_compressed_data() needs to wait all compression threads to
>> finish their work, after that all threads are free until the
>> migration feeds new request to them, reducing its call can improve
>> the throughput and use CPU resource more effectively
>>
>> We do not need to flush all threads at the end of iteration, the
>> data can be kept locally until the memory block is changed or
>> memory migration starts over in that case we will meet a dirtied
>> page which may still exists in compression threads's ring
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
>> ---
>>  migration/ram.c | 14 +++++++++++++-
>>  1 file changed, 13 insertions(+), 1 deletion(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 99ecf9b315..55966bc2c1 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -306,6 +306,8 @@ struct RAMState {
>>      uint64_t iterations;
>>      /* number of dirty bits in the bitmap */
>>      uint64_t migration_dirty_pages;
>> +    /* last dirty_sync_count we have seen */
>> +    uint64_t dirty_sync_count_prev;
>>      /* protects modification of the bitmap */
>>      QemuMutex bitmap_mutex;
>>      /* The RAMBlock used in the last src_page_requests */
>> @@ -3173,6 +3175,17 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>
>>      ram_control_before_iterate(f, RAM_CONTROL_ROUND);
>>
>> +    /*
>> +     * if memory migration starts over, we will meet a dirtied page which
>> +     * may still exists in compression threads's ring, so we should flush
>> +     * the compressed data to make sure the new page is not overwritten by
>> +     * the old one in the destination.
>> +     */
>> +    if (ram_counters.dirty_sync_count != rs->dirty_sync_count_prev) {
>> +        rs->dirty_sync_count_prev = ram_counters.dirty_sync_count;
>> +        flush_compressed_data(rs);
>
> AFAIU this only happens when ram_save_pending() calls
> migration_bitmap_sync(). Could we just simply flush there? Then we
> can avoid that new variable.
>

Yup, that's better indeed, will do it.
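To make the agreed-on direction concrete, here is a minimal sketch of what flushing at bitmap-sync time could look like. This is an illustration only, not the actual follow-up patch: the elided function bodies and the migrate_use_compression() guard are assumptions about the surrounding migration/ram.c code, and the real respin may place the call differently.

```c
/*
 * Illustrative sketch only, not the actual follow-up patch.  The "..."
 * elisions stand for the existing code in migration/ram.c.
 */
static void migration_bitmap_sync(RAMState *rs)
{
    /* ... existing per-RAMBlock dirty bitmap sync ... */

    /*
     * A new round of dirty pages is about to be scanned; pages still
     * queued in the compression threads' rings belong to the previous
     * round, so push them out before a newly dirtied copy can be
     * overtaken by the stale one on the destination.
     */
    if (migrate_use_compression()) {
        flush_compressed_data(rs);
    }

    /* ... update ram_counters.dirty_sync_count and other statistics ... */
}
```

The benefit Peter points out falls out directly: since ram_counters.dirty_sync_count only advances on this sync path, tying the flush to it makes the proposed dirty_sync_count_prev field in RAMState unnecessary.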
diff --git a/migration/ram.c b/migration/ram.c
index 99ecf9b315..55966bc2c1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -306,6 +306,8 @@ struct RAMState {
     uint64_t iterations;
     /* number of dirty bits in the bitmap */
     uint64_t migration_dirty_pages;
+    /* last dirty_sync_count we have seen */
+    uint64_t dirty_sync_count_prev;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* The RAMBlock used in the last src_page_requests */
@@ -3173,6 +3175,17 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 
     ram_control_before_iterate(f, RAM_CONTROL_ROUND);
 
+    /*
+     * if memory migration starts over, we will meet a dirtied page which
+     * may still exists in compression threads's ring, so we should flush
+     * the compressed data to make sure the new page is not overwritten by
+     * the old one in the destination.
+     */
+    if (ram_counters.dirty_sync_count != rs->dirty_sync_count_prev) {
+        rs->dirty_sync_count_prev = ram_counters.dirty_sync_count;
+        flush_compressed_data(rs);
+    }
+
     t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
     i = 0;
     while ((ret = qemu_file_rate_limit(f)) == 0 ||
@@ -3205,7 +3218,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         }
         i++;
     }
-    flush_compressed_data(rs);
     rcu_read_unlock();
 
     /*
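For readers outside the migration code, the cost argument in the commit message is essentially a drain barrier: flush_compressed_data() must wait until every compression thread has emptied its ring, so calling it at the end of every iteration leaves all workers idle until new pages are queued. The stand-alone sketch below models only that drain/wait pattern; it is not QEMU code, and every name in it (worker_pool, submit_page, complete_page, flush_all) is invented for illustration.

```c
/*
 * Conceptual model of a "flush waits for all workers" barrier.
 * Not QEMU code; all names are invented for illustration.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  drained;
    int             in_flight;   /* pages handed to workers, not yet sent */
} worker_pool;

/* The migration thread queues one page for background compression. */
static void submit_page(worker_pool *p)
{
    pthread_mutex_lock(&p->lock);
    p->in_flight++;
    pthread_mutex_unlock(&p->lock);
}

/* A compression worker reports one page as compressed and sent. */
static void complete_page(worker_pool *p)
{
    pthread_mutex_lock(&p->lock);
    if (--p->in_flight == 0) {
        pthread_cond_broadcast(&p->drained);
    }
    pthread_mutex_unlock(&p->lock);
}

/*
 * The expensive part: block until every worker's ring is empty.  Doing
 * this once per iteration stalls the whole pipeline while the caller
 * waits and the workers have nothing new to chew on.
 */
static void flush_all(worker_pool *p)
{
    pthread_mutex_lock(&p->lock);
    while (p->in_flight > 0) {
        pthread_cond_wait(&p->drained, &p->lock);
    }
    pthread_mutex_unlock(&p->lock);
}

int main(void)
{
    worker_pool pool = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
    };

    submit_page(&pool);
    complete_page(&pool);   /* normally called from a worker thread */
    flush_all(&pool);       /* returns once in_flight drops to 0 */
    printf("flushed, in_flight=%d\n", pool.in_flight);
    return 0;
}
```

Deferring the flush until it is actually required, when the RAMBlock changes or a new dirty-sync round begins as the patch argues, keeps the workers busy between iterations instead of repeatedly draining them.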