Message ID | 20180428081045.8878-1-xiaoguangrong@tencent.com (mailing list archive) |
---|---|
State | New, archived |
On Sat, Apr 28, 2018 at 04:10:45PM +0800, guangrong.xiao@gmail.com wrote:
> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>
> Fix the bug introduced by da3f56cb2e767016 (migration: remove
> ram_save_compressed_page()), It should be 'return' rather than
> 'res'
>
> Sorry for this stupid mistake :(
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

So is that only a performance degradation without this fix (since
AFAIU the compressed pages will be sent twice)? Thanks,

> ---
>  migration/ram.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 01cc815410..699546cc43 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1490,7 +1490,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>       * CPU resource.
>       */
>      if (block == rs->last_sent_block && save_page_use_compression(rs)) {
> -        res = compress_page_with_multi_thread(rs, block, offset);
> +        return compress_page_with_multi_thread(rs, block, offset);
>      }
>
>      return ram_save_page(rs, pss, last_stage);
> --
> 2.14.3
>
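To make the effect of the one-word change concrete, here is a toy, stand-alone sketch of the control flow being discussed. Every function below is a stub named after its QEMU counterpart; this is not the real migration/ram.c code, only an illustration of why the assignment form sends the same page twice while the return form sends it once.

```c
/*
 * Toy illustration only: with 'res = ...' the function falls through
 * and also sends the page uncompressed; with 'return ...' the page
 * goes out exactly once.  All functions here are stubs.
 */
#include <stdbool.h>
#include <stdio.h>

static int compress_page_with_multi_thread(void)
{
    printf("page queued for a compression thread\n");
    return 1;
}

static int ram_save_page(void)
{
    printf("page sent uncompressed\n");
    return 1;
}

static int save_target_page(bool use_compression, bool fixed)
{
    if (use_compression) {
        if (fixed) {
            /* Fixed behaviour: return immediately, one copy on the wire. */
            return compress_page_with_multi_thread();
        }
        /* Buggy behaviour: the result is stored but execution falls
         * through, so ram_save_page() below sends the same page again. */
        int res = compress_page_with_multi_thread();
        (void)res;
    }
    return ram_save_page();
}

int main(void)
{
    printf("-- buggy --\n");
    save_target_page(true, false);  /* prints both lines: page sent twice */
    printf("-- fixed --\n");
    save_target_page(true, true);   /* prints only the compression line */
    return 0;
}
```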
* Peter Xu (peterx@redhat.com) wrote:
> On Sat, Apr 28, 2018 at 04:10:45PM +0800, guangrong.xiao@gmail.com wrote:
> > From: Xiao Guangrong <xiaoguangrong@tencent.com>
> >
> > Fix the bug introduced by da3f56cb2e767016 (migration: remove
> > ram_save_compressed_page()), It should be 'return' rather than
> > 'res'
> >
> > Sorry for this stupid mistake :(
> >
> > Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> So is that only a performance degradation without this fix (since
> AFAIU the compressed pages will be sent twice)? Thanks,

It might be a bit more messy than that, because I think
compress_page_with_multi_thread() hands the page to a compression
thread to compress and write, but I don't think it waits for it.
So I think you'll end up with this being written at the same time as the
uncompressed data is written - I'm not sure what the other end receives.
(Which makes me realise I don't think I understand how the compression
threads are safe with their writing data out.)

Anyway, I think we should get this one in quickly.

Dave

> > ---
> >  migration/ram.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 01cc815410..699546cc43 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -1490,7 +1490,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
> >       * CPU resource.
> >       */
> >      if (block == rs->last_sent_block && save_page_use_compression(rs)) {
> > -        res = compress_page_with_multi_thread(rs, block, offset);
> > +        return compress_page_with_multi_thread(rs, block, offset);
> >      }
> >
> >      return ram_save_page(rs, pss, last_stage);
> > --
> > 2.14.3
> >
>
> --
> Peter Xu

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
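To make the worry concrete: if a compression thread really did write to the migration stream directly while the main thread wrote the uncompressed copy of the same page, nothing would order the two writers. The toy program below shows that unsynchronized scenario; it only illustrates the concern, and, as the next reply points out, the real code does not actually work this way.

```c
/*
 * Illustration of the concern only: two writers sharing one output
 * stream with no ordering between them.  The real migration code does
 * not do this (see the following reply).
 */
#include <pthread.h>
#include <stdio.h>

static void *writer(void *opaque)
{
    const char *tag = opaque;
    /* Emit the "page" in small pieces so the two streams can interleave. */
    for (int i = 0; i < 8; i++) {
        fprintf(stdout, "%s", tag);
        fflush(stdout);
    }
    return NULL;
}

int main(void)
{
    pthread_t compressed, uncompressed;

    /* Nothing orders these two writers, so the receiver could see the
     * compressed and uncompressed bytes of the same page interleaved. */
    pthread_create(&compressed, NULL, writer, "C");
    pthread_create(&uncompressed, NULL, writer, "U");
    pthread_join(compressed, NULL);
    pthread_join(uncompressed, NULL);
    printf("\n");
    return 0;
}
```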
On Wed, May 02, 2018 at 04:30:39PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > On Sat, Apr 28, 2018 at 04:10:45PM +0800, guangrong.xiao@gmail.com wrote:
> > > From: Xiao Guangrong <xiaoguangrong@tencent.com>
> > >
> > > Fix the bug introduced by da3f56cb2e767016 (migration: remove
> > > ram_save_compressed_page()), It should be 'return' rather than
> > > 'res'
> > >
> > > Sorry for this stupid mistake :(
> > >
> > > Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
> >
> > Reviewed-by: Peter Xu <peterx@redhat.com>
> >
> > So is that only a performance degradation without this fix (since
> > AFAIU the compressed pages will be sent twice)? Thanks,
>
> It might be a bit more messy than that, because I think
> compress_page_with_multi_thread() hands the page to a compression
> thread to compress and write, but I don't think it waits for it.
> So I think you'll end up with this being written at the same time as the
> uncompressed data is written - I'm not sure what the other end receives.
> (Which makes me realise I don't think I understand how the compression
> threads are safe with their writing data out.)

IIUC the compression threads are not really writing stuff to sockets, but
only to local RAM.  Please refer to do_data_compress(): when we do
do_compress_ram_page() we are writing to CompressParam.file, which is
only a local buffer.  We flush this buffer in flush_compressed_data()
with qemu_put_qemu_file().  So IMHO the pages are still serialized no
matter whether it's a compressed page or a normal page.

Then IMHO it'll be fine, since no matter whether we receive the
compressed page or the normal page first, we just update that page
twice.  Then we have two possibilities:

- If the page is further modified, then both updates won't have any
  effect since we'll send a new version soon;

- If the page is not modified any more, then both the compressed page
  and the normal page will have exactly the same content, so it won't
  matter much which one we handle first on the destination (note that
  we will still handle the pages sequentially on destination).

> Anyway, I think we should get this one in quickly.

Yes.

Regards,
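For readers unfamiliar with that part of migration/ram.c, the snippet below is a stripped-down stand-in for the pattern Peter describes: each compression thread writes only into its own local buffer, and the migration thread later flushes that buffer into the single output stream. The type and function names merely echo the real ones (CompressParam, do_data_compress(), flush_compressed_data()); nothing here is the actual QEMU implementation.

```c
/*
 * Stand-in for the per-thread-buffer pattern: the worker never touches
 * the output stream; only the caller (one thread) writes it, so the
 * stream stays serialized.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

typedef struct CompressParamSketch {
    char buf[256];   /* stands in for CompressParam.file (a local QEMUFile) */
    size_t len;
} CompressParamSketch;

static void *compress_worker(void *opaque)
{
    CompressParamSketch *p = opaque;
    /* "Compress" into the local buffer; no socket I/O happens here. */
    p->len = (size_t)snprintf(p->buf, sizeof(p->buf), "compressed-page-data");
    return NULL;
}

/* Analogue of flush_compressed_data(): only the migration thread
 * copies the local buffer into the real output stream. */
static void flush_to_stream(FILE *stream, CompressParamSketch *p)
{
    fwrite(p->buf, 1, p->len, stream);
    fputc('\n', stream);
    p->len = 0;
}

int main(void)
{
    CompressParamSketch param = { .len = 0 };
    pthread_t tid;

    pthread_create(&tid, NULL, compress_worker, &param);
    pthread_join(&tid, NULL);        /* the worker never wrote to stdout */
    flush_to_stream(stdout, &param); /* single writer: the main thread  */
    return 0;
}
```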
On 05/02/2018 10:46 AM, Peter Xu wrote:
> On Sat, Apr 28, 2018 at 04:10:45PM +0800, guangrong.xiao@gmail.com wrote:
>> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>>
>> Fix the bug introduced by da3f56cb2e767016 (migration: remove
>> ram_save_compressed_page()), It should be 'return' rather than
>> 'res'
>>
>> Sorry for this stupid mistake :(
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> So is that only a performance degradation without this fix (since
> AFAIU the compressed pages will be sent twice)? Thanks,

Yes, that's why we did not detect it in our test. :(
```diff
diff --git a/migration/ram.c b/migration/ram.c
index 01cc815410..699546cc43 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1490,7 +1490,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
      * CPU resource.
      */
     if (block == rs->last_sent_block && save_page_use_compression(rs)) {
-        res = compress_page_with_multi_thread(rs, block, offset);
+        return compress_page_with_multi_thread(rs, block, offset);
     }
 
     return ram_save_page(rs, pss, last_stage);
```