From patchwork Tue Mar 27 09:10:37 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 10312561
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com,
    peterx@redhat.com, jiang.biao2@zte.com.cn, wei.w.wang@intel.com,
    Xiao Guangrong
Subject: [PATCH v2 04/10] migration: detect compression and decompression errors
Date: Tue, 27 Mar 2018 17:10:37 +0800
Message-Id: <20180327091043.30220-5-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180327091043.30220-1-xiaoguangrong@tencent.com>
References: <20180327091043.30220-1-xiaoguangrong@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Xiao Guangrong

Currently the page being compressed is allowed to be updated by the VM on
the source QEMU; correspondingly, the destination QEMU simply ignores any
decompression error. We thus completely miss the chance to catch real
errors, and the VM can be silently corrupted.

To make migration more robust, copy the page to a buffer first so that it
cannot be written by the VM while being compressed, then detect and handle
both compression and decompression errors properly.

Signed-off-by: Xiao Guangrong
---
 migration/qemu-file.c |  4 ++--
 migration/ram.c       | 55 +++++++++++++++++++++++++++++++++++----------------
 2 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index e924cc23c5..a7614e8c28 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -710,9 +710,9 @@ ssize_t qemu_put_compression_data(QEMUFile *f, z_stream *stream,
     blen = qemu_compress_data(stream, f->buf + f->buf_index + sizeof(int32_t),
                               blen, p, size);
     if (blen < 0) {
-        error_report("Compress Failed!");
-        return 0;
+        return -1;
     }
+
     qemu_put_be32(f, blen);
     if (f->ops->writev_buffer) {
         add_to_iovec(f, f->buf + f->buf_index, blen, false);
diff --git a/migration/ram.c b/migration/ram.c
index 6b699650ca..e85191c1cb 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -269,7 +269,10 @@ struct CompressParam {
     QemuCond cond;
     RAMBlock *block;
     ram_addr_t offset;
+
+    /* internally used fields */
     z_stream stream;
+    uint8_t *originbuf;
 };
 typedef struct CompressParam CompressParam;
 
@@ -278,6 +281,7 @@ struct DecompressParam {
     bool quit;
     QemuMutex mutex;
     QemuCond cond;
+    QEMUFile *file;
     void *des;
     uint8_t *compbuf;
     int len;
@@ -302,7 +306,7 @@ static QemuMutex decomp_done_lock;
 static QemuCond decomp_done_cond;
 
 static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
-                                ram_addr_t offset);
+                                ram_addr_t offset, uint8_t *source_buf);
 
 static void *do_data_compress(void *opaque)
 {
@@ -318,7 +322,8 @@ static void *do_data_compress(void *opaque)
             param->block = NULL;
             qemu_mutex_unlock(&param->mutex);
 
-            do_compress_ram_page(param->file, &param->stream, block, offset);
+            do_compress_ram_page(param->file, &param->stream, block, offset,
+                                 param->originbuf);
 
             qemu_mutex_lock(&comp_done_lock);
             param->done = true;
@@ -372,6 +377,7 @@ static void compress_threads_save_cleanup(void)
         qemu_mutex_destroy(&comp_param[i].mutex);
         qemu_cond_destroy(&comp_param[i].cond);
         deflateEnd(&comp_param[i].stream);
+        g_free(comp_param[i].originbuf);
         comp_param[i].stream.opaque = NULL;
     }
     qemu_mutex_destroy(&comp_done_lock);
@@ -395,8 +401,14 @@ static int compress_threads_save_setup(void)
     qemu_cond_init(&comp_done_cond);
     qemu_mutex_init(&comp_done_lock);
     for (i = 0; i < thread_count; i++) {
+        comp_param[i].originbuf = g_try_malloc(TARGET_PAGE_SIZE);
+        if (!comp_param[i].originbuf) {
+            goto exit;
+        }
+
         if (deflateInit(&comp_param[i].stream,
                         migrate_compress_level()) != Z_OK) {
+            g_free(comp_param[i].originbuf);
             goto exit;
         }
         comp_param[i].stream.opaque = &comp_param[i];
@@ -1055,7 +1067,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 }
 
 static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
-                                ram_addr_t offset)
+                                ram_addr_t offset, uint8_t *source_buf)
 {
     RAMState *rs = ram_state;
     int bytes_sent, blen;
@@ -1063,7 +1075,14 @@ static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
 
     bytes_sent = save_page_header(rs, f, block, offset |
                                   RAM_SAVE_FLAG_COMPRESS_PAGE);
-    blen = qemu_put_compression_data(f, stream, p, TARGET_PAGE_SIZE);
+
+    /*
+     * copy it to a internal buffer to avoid it being modified by VM
+     * so that we can catch up the error during compression and
+     * decompression
+     */
+    memcpy(source_buf, p, TARGET_PAGE_SIZE);
+    blen = qemu_put_compression_data(f, stream, source_buf, TARGET_PAGE_SIZE);
     if (blen < 0) {
         bytes_sent = 0;
         qemu_file_set_error(migrate_get_current()->to_dst_file, blen);
@@ -2557,7 +2576,7 @@ static void *do_data_decompress(void *opaque)
     DecompressParam *param = opaque;
     unsigned long pagesize;
     uint8_t *des;
-    int len;
+    int len, ret;
 
     qemu_mutex_lock(&param->mutex);
     while (!param->quit) {
@@ -2568,13 +2587,13 @@ static void *do_data_decompress(void *opaque)
             qemu_mutex_unlock(&param->mutex);
 
             pagesize = TARGET_PAGE_SIZE;
-            /* qemu_uncompress_data() will return failed in some case,
-             * especially when the page is dirtied when doing the compression,
-             * it's not a problem because the dirty page will be retransferred
-             * and uncompress() won't break the data in other pages.
-             */
-            qemu_uncompress_data(&param->stream, des, pagesize, param->compbuf,
-                                 len);
+
+            ret = qemu_uncompress_data(&param->stream, des, pagesize,
+                                       param->compbuf, len);
+            if (ret < 0) {
+                error_report("decompress data failed");
+                qemu_file_set_error(param->file, ret);
+            }
 
             qemu_mutex_lock(&decomp_done_lock);
             param->done = true;
@@ -2591,12 +2610,12 @@ static void *do_data_decompress(void *opaque)
     return NULL;
 }
 
-static void wait_for_decompress_done(void)
+static int wait_for_decompress_done(QEMUFile *f)
 {
     int idx, thread_count;
 
     if (!migrate_use_compression()) {
-        return;
+        return 0;
     }
 
     thread_count = migrate_decompress_threads();
@@ -2607,6 +2626,7 @@ static void wait_for_decompress_done(void)
         }
     }
     qemu_mutex_unlock(&decomp_done_lock);
+    return qemu_file_get_error(f);
 }
 
 static void compress_threads_load_cleanup(void)
@@ -2646,7 +2666,7 @@ static void compress_threads_load_cleanup(void)
     decomp_param = NULL;
 }
 
-static int compress_threads_load_setup(void)
+static int compress_threads_load_setup(QEMUFile *f)
 {
     int i, thread_count;
 
@@ -2665,6 +2685,7 @@ static int compress_threads_load_setup(void)
         }
         decomp_param[i].stream.opaque = &decomp_param[i];
+        decomp_param[i].file = f;
         qemu_mutex_init(&decomp_param[i].mutex);
         qemu_cond_init(&decomp_param[i].cond);
         decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
@@ -2719,7 +2740,7 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
  */
 static int ram_load_setup(QEMUFile *f, void *opaque)
 {
-    if (compress_threads_load_setup()) {
+    if (compress_threads_load_setup(f)) {
         return -1;
     }
 
@@ -3074,7 +3095,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
         }
     }
 
-    wait_for_decompress_done();
+    ret |= wait_for_decompress_done(f);
     rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
     return ret;