From patchwork Tue Aug 21 08:10:20 2018 X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571173 From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com,
wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 01/10] migration: do not wait for free thread Date: Tue, 21 Aug 2018 16:10:20 +0800 Message-Id: <20180821081029.26121-2-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong Instead of putting the main thread to sleep state to wait for free compression thread, we can directly post it out as normal page that reduces the latency and uses CPUs more efficiently A parameter, compress-wait-thread, is introduced, it can be enabled if the user really wants the old behavior Reviewed-by: Peter Xu Signed-off-by: Xiao Guangrong Reviewed-by: Juan Quintela --- hmp.c | 8 ++++++++ migration/migration.c | 21 +++++++++++++++++++++ migration/migration.h | 1 + migration/ram.c | 45 ++++++++++++++++++++++++++------------------- qapi/migration.json | 27 ++++++++++++++++++++++----- 5 files changed, 78 insertions(+), 24 deletions(-) diff --git a/hmp.c b/hmp.c index 2aafb50e8e..47d36e3ccf 100644 --- a/hmp.c +++ b/hmp.c @@ -327,6 +327,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict) monitor_printf(mon, "%s: %u\n", MigrationParameter_str(MIGRATION_PARAMETER_COMPRESS_THREADS), params->compress_threads); + assert(params->has_compress_wait_thread); + monitor_printf(mon, "%s: %s\n", + MigrationParameter_str(MIGRATION_PARAMETER_COMPRESS_WAIT_THREAD), + params->compress_wait_thread ? "on" : "off"); assert(params->has_decompress_threads); monitor_printf(mon, "%s: %u\n", MigrationParameter_str(MIGRATION_PARAMETER_DECOMPRESS_THREADS), @@ -1623,6 +1627,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict) p->has_compress_threads = true; visit_type_int(v, param, &p->compress_threads, &err); break; + case MIGRATION_PARAMETER_COMPRESS_WAIT_THREAD: + p->has_compress_wait_thread = true; + visit_type_bool(v, param, &p->compress_wait_thread, &err); + break; case MIGRATION_PARAMETER_DECOMPRESS_THREADS: p->has_decompress_threads = true; visit_type_int(v, param, &p->decompress_threads, &err); diff --git a/migration/migration.c b/migration/migration.c index b7d9854bda..2ccaadc03d 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -671,6 +671,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp) params->compress_level = s->parameters.compress_level; params->has_compress_threads = true; params->compress_threads = s->parameters.compress_threads; + params->has_compress_wait_thread = true; + params->compress_wait_thread = s->parameters.compress_wait_thread; params->has_decompress_threads = true; params->decompress_threads = s->parameters.decompress_threads; params->has_cpu_throttle_initial = true; @@ -1061,6 +1063,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params, dest->compress_threads = params->compress_threads; } + if (params->has_compress_wait_thread) { + dest->compress_wait_thread = params->compress_wait_thread; + } + if (params->has_decompress_threads) { dest->decompress_threads = params->decompress_threads; } @@ -1126,6 +1132,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp) s->parameters.compress_threads = params->compress_threads; } + if (params->has_compress_wait_thread) { + s->parameters.compress_wait_thread = 
params->compress_wait_thread; + } + if (params->has_decompress_threads) { s->parameters.decompress_threads = params->decompress_threads; } @@ -1871,6 +1881,15 @@ int migrate_compress_threads(void) return s->parameters.compress_threads; } +int migrate_compress_wait_thread(void) +{ + MigrationState *s; + + s = migrate_get_current(); + + return s->parameters.compress_wait_thread; +} + int migrate_decompress_threads(void) { MigrationState *s; @@ -3131,6 +3150,8 @@ static Property migration_properties[] = { DEFINE_PROP_UINT8("x-compress-threads", MigrationState, parameters.compress_threads, DEFAULT_MIGRATE_COMPRESS_THREAD_COUNT), + DEFINE_PROP_BOOL("x-compress-wait-thread", MigrationState, + parameters.compress_wait_thread, true), DEFINE_PROP_UINT8("x-decompress-threads", MigrationState, parameters.decompress_threads, DEFAULT_MIGRATE_DECOMPRESS_THREAD_COUNT), diff --git a/migration/migration.h b/migration/migration.h index 64a7b33735..a46b9e6c8d 100644 --- a/migration/migration.h +++ b/migration/migration.h @@ -271,6 +271,7 @@ bool migrate_use_return_path(void); bool migrate_use_compression(void); int migrate_compress_level(void); int migrate_compress_threads(void); +int migrate_compress_wait_thread(void); int migrate_decompress_threads(void); bool migrate_use_events(void); bool migrate_postcopy_blocktime(void); diff --git a/migration/ram.c b/migration/ram.c index 24dea2730c..ae9e83c2b6 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1889,30 +1889,34 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block, ram_addr_t offset) { int idx, thread_count, bytes_xmit = -1, pages = -1; + bool wait = migrate_compress_wait_thread(); thread_count = migrate_compress_threads(); qemu_mutex_lock(&comp_done_lock); - while (true) { - for (idx = 0; idx < thread_count; idx++) { - if (comp_param[idx].done) { - comp_param[idx].done = false; - bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file); - qemu_mutex_lock(&comp_param[idx].mutex); - set_compress_params(&comp_param[idx], block, offset); - qemu_cond_signal(&comp_param[idx].cond); - qemu_mutex_unlock(&comp_param[idx].mutex); - pages = 1; - ram_counters.normal++; - ram_counters.transferred += bytes_xmit; - break; - } - } - if (pages > 0) { +retry: + for (idx = 0; idx < thread_count; idx++) { + if (comp_param[idx].done) { + comp_param[idx].done = false; + bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file); + qemu_mutex_lock(&comp_param[idx].mutex); + set_compress_params(&comp_param[idx], block, offset); + qemu_cond_signal(&comp_param[idx].cond); + qemu_mutex_unlock(&comp_param[idx].mutex); + pages = 1; + ram_counters.normal++; + ram_counters.transferred += bytes_xmit; break; - } else { - qemu_cond_wait(&comp_done_cond, &comp_done_lock); } } + + /* + * wait for the free thread if the user specifies 'compress-wait-thread', + * otherwise we will post the page out in the main thread as normal page. + */ + if (pages < 0 && wait) { + qemu_cond_wait(&comp_done_cond, &comp_done_lock); + goto retry; + } qemu_mutex_unlock(&comp_done_lock); return pages; @@ -2226,7 +2230,10 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss, * CPU resource. 
*/ if (block == rs->last_sent_block && save_page_use_compression(rs)) { - return compress_page_with_multi_thread(rs, block, offset); + res = compress_page_with_multi_thread(rs, block, offset); + if (res > 0) { + return res; + } } else if (migrate_use_multifd()) { return ram_save_multifd_page(rs, block, offset); } diff --git a/qapi/migration.json b/qapi/migration.json index 186e8a7303..940cb5cbd0 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -462,6 +462,11 @@ # @compress-threads: Set compression thread count to be used in live migration, # the compression thread count is an integer between 1 and 255. # +# @compress-wait-thread: Controls behavior when all compression threads are +# currently busy. If true (default), wait for a free +# compression thread to become available; otherwise, +# send the page uncompressed. (Since 3.1) +# # @decompress-threads: Set decompression thread count to be used in live # migration, the decompression thread count is an integer between 1 # and 255. Usually, decompression is at least 4 times as fast as @@ -526,11 +531,11 @@ # Since: 2.4 ## { 'enum': 'MigrationParameter', - 'data': ['compress-level', 'compress-threads', 'decompress-threads', - 'cpu-throttle-initial', 'cpu-throttle-increment', - 'tls-creds', 'tls-hostname', 'max-bandwidth', - 'downtime-limit', 'x-checkpoint-delay', 'block-incremental', - 'x-multifd-channels', 'x-multifd-page-count', + 'data': ['compress-level', 'compress-threads', 'compress-wait-thread', + 'decompress-threads', 'cpu-throttle-initial', + 'cpu-throttle-increment', 'tls-creds', 'tls-hostname', + 'max-bandwidth', 'downtime-limit', 'x-checkpoint-delay', + 'block-incremental', 'x-multifd-channels', 'x-multifd-page-count', 'xbzrle-cache-size', 'max-postcopy-bandwidth' ] } ## @@ -540,6 +545,11 @@ # # @compress-threads: compression thread count # +# @compress-wait-thread: Controls behavior when all compression threads are +# currently busy. If true (default), wait for a free +# compression thread to become available; otherwise, +# send the page uncompressed. (Since 3.1) +# # @decompress-threads: decompression thread count # # @cpu-throttle-initial: Initial percentage of time guest cpus are @@ -610,6 +620,7 @@ { 'struct': 'MigrateSetParameters', 'data': { '*compress-level': 'int', '*compress-threads': 'int', + '*compress-wait-thread': 'bool', '*decompress-threads': 'int', '*cpu-throttle-initial': 'int', '*cpu-throttle-increment': 'int', @@ -649,6 +660,11 @@ # # @compress-threads: compression thread count # +# @compress-wait-thread: Controls behavior when all compression threads are +# currently busy. If true (default), wait for a free +# compression thread to become available; otherwise, +# send the page uncompressed. 
(Since 3.1) +# # @decompress-threads: decompression thread count # # @cpu-throttle-initial: Initial percentage of time guest cpus are @@ -714,6 +730,7 @@ { 'struct': 'MigrationParameters', 'data': { '*compress-level': 'uint8', '*compress-threads': 'uint8', + '*compress-wait-thread': 'bool', '*decompress-threads': 'uint8', '*cpu-throttle-initial': 'uint8', '*cpu-throttle-increment': 'uint8', From patchwork Tue Aug 21 08:10:21 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571175 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C640A1390 for ; Tue, 21 Aug 2018 08:10:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BAB7029D9A for ; Tue, 21 Aug 2018 08:10:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B8F6629DA1; Tue, 21 Aug 2018 08:10:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DFAAE29DEE for ; Tue, 21 Aug 2018 08:10:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726780AbeHUL36 (ORCPT ); Tue, 21 Aug 2018 07:29:58 -0400 Received: from mail-pf1-f178.google.com ([209.85.210.178]:34160 "EHLO mail-pf1-f178.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726723AbeHUL36 (ORCPT ); Tue, 21 Aug 2018 07:29:58 -0400 Received: by mail-pf1-f178.google.com with SMTP id k19-v6so8135002pfi.1 for ; Tue, 21 Aug 2018 01:10:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=3zt+Khuobr5RUdqrSMIsTzX+YCNaFtVf0EUqgmfZh9Y=; b=dMa6eyGEtG+k5ACj8ksXh7YpjZ37uvkyXgvmf2uDI0/6EgTFDz37l9rvsF6hRzeA8V TkeICGDMhtQe6651ChwOqVz1zGdsa6NtyevoHvXqDJFtwGwx6WUSYSiYzUQGR+ayyuaD T3zaFlDu01yPSuAeJ4dWGlzODmCFWaaytX66La81Fxk1lr8CjVW/ZMYDetOWbMaspoZq UAJV/xzYgcz+Ea9FVYWmi09VD+QKwWmY0EyuFIC0lVlI0bqZGcCkE8k9fs61EHxQDPuR 1kIusnpWaHzQn4NEDITWoPB/JrZ7xea+gAhAyp133iiX5aXqeG7B4iWcfJTKHndxoQfd AHlA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=3zt+Khuobr5RUdqrSMIsTzX+YCNaFtVf0EUqgmfZh9Y=; b=GqWtfxtFYavQ+/gNZw5SDhSgXpeHmt9YZVsruZgMeEDMxGxAt9U16SPiQBOFF1GayY co1Kd4Sv+M4Dgkcd1EnJQj7zWmxHsZ+zDoS8QVDvidIaEXumHqNYarOHQAYv5xVivPGN biqKiv98oyfbypMqSXWLfNroER0jHi9ywuvj0gX1Zx8UCTNG6P9uDGZzTtbBkjofxB4L 0N8s/PrHAONwVPxMMQamOR2z6u/bqMjSh/rkLa7AOFqMaKIKwv5NdKFttl2SA1sgcADz wbduGifHWje3UFgVbp5JWaMjzqbeShLukUY+om3zJ85GF+aobxzzDi9CCfsKhnC3Jc/W ZeoA== X-Gm-Message-State: AOUpUlHRB0M6m3hsegkZgmQEfv8qCl+EfE3kfVvEXO3O142yXCyaXETn MJkeJfAjUon6mfuS8uQ+cyk= X-Google-Smtp-Source: AA+uWPybsucCFsVg908KhrGc97NrsLqqOfv8avn3mhtlPgFB7zrejNmBQZEqQXvzlQLgCWGNvCSbdg== X-Received: by 2002:a63:4826:: with SMTP id v38-v6mr10884638pga.379.1534839050117; Tue, 21 Aug 2018 01:10:50 -0700 (PDT) Received: from localhost.localdomain ([203.205.141.40]) by smtp.gmail.com with ESMTPSA 
id r64-v6sm20644023pfk.157.2018.08.21.01.10.46 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 21 Aug 2018 01:10:49 -0700 (PDT) From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 02/10] migration: fix counting normal page for compression Date: Tue, 21 Aug 2018 16:10:21 +0800 Message-Id: <20180821081029.26121-3-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong The compressed page is not normal page Reviewed-by: Peter Xu Signed-off-by: Xiao Guangrong Reviewed-by: Juan Quintela --- migration/ram.c | 1 - 1 file changed, 1 deletion(-) diff --git a/migration/ram.c b/migration/ram.c index ae9e83c2b6..d631b9a6fe 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1903,7 +1903,6 @@ retry: qemu_cond_signal(&comp_param[idx].cond); qemu_mutex_unlock(&comp_param[idx].mutex); pages = 1; - ram_counters.normal++; ram_counters.transferred += bytes_xmit; break; } From patchwork Tue Aug 21 08:10:22 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571177 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 22E751390 for ; Tue, 21 Aug 2018 08:10:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1642F29D9A for ; Tue, 21 Aug 2018 08:10:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1438629E19; Tue, 21 Aug 2018 08:10:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id AB7D629D9A for ; Tue, 21 Aug 2018 08:10:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726768AbeHULaB (ORCPT ); Tue, 21 Aug 2018 07:30:01 -0400 Received: from mail-pf1-f171.google.com ([209.85.210.171]:44566 "EHLO mail-pf1-f171.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726560AbeHULaB (ORCPT ); Tue, 21 Aug 2018 07:30:01 -0400 Received: by mail-pf1-f171.google.com with SMTP id k21-v6so8121350pff.11 for ; Tue, 21 Aug 2018 01:10:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=NsmhccRCp5gWMJ0cmbdhkw17Dbew1msB4Oy/3ymK/ko=; b=AvZxw0tuZk/hGJGvBJF4lHrIGTeeG/JUIgTOcxIgscYLK02WRZsc54vUnM5ItpYrhg yg0miwrpM8dd96UXRjddwM/zcXqxg67T6wV6xSvQBdi58F4nHGJyMUPu9Q2pV9nF59Eh 6OfIDmtUAM2TdBt78UyKwAX3TVHg4QcF+m6KcIjrrHCvREk7vm9AJhe+XrrL9SdWodIF 
4X8VUMOV5GwOkU/9DgvqBp0Zy3kSQMDSuuXr+fcyBYobgcggUgDQTl+N82OGkcazd4Tx gQEcihhdIihS04T6seeK/OO+3Nfr4LE+m7pvftHHdo0ek2Ja739mo50QFxZWNC6faJpG D4KQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=NsmhccRCp5gWMJ0cmbdhkw17Dbew1msB4Oy/3ymK/ko=; b=LKeyben9GtIu0KID8qNL+nrMAP00O+NnPS9CZDmqLTN82EtU6SI/3oh4WiK0gtLFs6 X78hNCuul/BHyoFoqLP9YxlrEs9anIOm4aAsf8G4Uj86oMGGrROH4vgfCB7+s8zScQMf 7Vyb0ycKoqrZ4FK87xg0i1H+jvNxLkwbjI1GYEIcBzu81VdQfOoxm4gkxtAJ6l6upllQ VrEgbdqTN7BLiVAMgb1mJzxbOG3xLtTVhbVVpITjUi86FNpXK8390m4GqfY3J+LAS9ii puCrkcIxGwlUCjKveUsC9t+bScGLFzw8gKxdFG9qmwQad8iCCI0o79mtsFyBxga8J+GT Pfow== X-Gm-Message-State: AOUpUlHtUGdDOnVlnUPm8yqUwPbo0+qCp3IwCRpAIuwT67/B357PxUeo hPI653mEhdwkZucSqyxMRjY= X-Google-Smtp-Source: AA+uWPwmx3lfBQJvjuSAyB9JbnOIs2F0XNt2DA1++Q5Cf5BU14dMQ7uH3jPYzx5wzai1Ai1mEgjGGw== X-Received: by 2002:a65:5803:: with SMTP id g3-v6mr46790443pgr.117.1534839053709; Tue, 21 Aug 2018 01:10:53 -0700 (PDT) Received: from localhost.localdomain ([203.205.141.40]) by smtp.gmail.com with ESMTPSA id r64-v6sm20644023pfk.157.2018.08.21.01.10.50 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 21 Aug 2018 01:10:53 -0700 (PDT) From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 03/10] migration: introduce save_zero_page_to_file Date: Tue, 21 Aug 2018 16:10:22 +0800 Message-Id: <20180821081029.26121-4-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong It will be used by the compression threads Reviewed-by: Peter Xu Signed-off-by: Xiao Guangrong Reviewed-by: Juan Quintela --- migration/ram.c | 40 ++++++++++++++++++++++++++++++---------- 1 file changed, 30 insertions(+), 10 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index d631b9a6fe..49ace30614 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1667,27 +1667,47 @@ static void migration_bitmap_sync(RAMState *rs) /** * save_zero_page: send the zero page to the stream * - * Returns the number of pages written. 
+ * Returns the size of data written to the file, 0 means the page is not + * a zero page * * @rs: current RAM state + * @file: the file where the data is saved * @block: block that contains the page we want to send * @offset: offset inside the block for the page */ -static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset) +static int save_zero_page_to_file(RAMState *rs, QEMUFile *file, + RAMBlock *block, ram_addr_t offset) { uint8_t *p = block->host + offset; - int pages = -1; + int len = 0; if (is_zero_range(p, TARGET_PAGE_SIZE)) { - ram_counters.duplicate++; - ram_counters.transferred += - save_page_header(rs, rs->f, block, offset | RAM_SAVE_FLAG_ZERO); - qemu_put_byte(rs->f, 0); - ram_counters.transferred += 1; - pages = 1; + len += save_page_header(rs, file, block, offset | RAM_SAVE_FLAG_ZERO); + qemu_put_byte(file, 0); + len += 1; } + return len; +} - return pages; +/** + * save_zero_page: send the zero page to the stream + * + * Returns the number of pages written. + * + * @rs: current RAM state + * @block: block that contains the page we want to send + * @offset: offset inside the block for the page + */ +static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset) +{ + int len = save_zero_page_to_file(rs, rs->f, block, offset); + + if (len) { + ram_counters.duplicate++; + ram_counters.transferred += len; + return 1; + } + return -1; } static void ram_release_pages(const char *rbname, uint64_t offset, int pages) From patchwork Tue Aug 21 08:10:23 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571179 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2A226920 for ; Tue, 21 Aug 2018 08:11:00 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1AA9E29DD0 for ; Tue, 21 Aug 2018 08:11:00 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1903329E0F; Tue, 21 Aug 2018 08:11:00 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id AC75029DDD for ; Tue, 21 Aug 2018 08:10:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726784AbeHULaF (ORCPT ); Tue, 21 Aug 2018 07:30:05 -0400 Received: from mail-pl0-f42.google.com ([209.85.160.42]:44684 "EHLO mail-pl0-f42.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726549AbeHULaF (ORCPT ); Tue, 21 Aug 2018 07:30:05 -0400 Received: by mail-pl0-f42.google.com with SMTP id ba4-v6so8427062plb.11 for ; Tue, 21 Aug 2018 01:10:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=PgGybqDLCJbEJ+qjtJtsZofD+YTrV5tTXrhVisMfhNg=; b=F2UQT79N+Yd7B7oKuECGWfiPfLflRwBZCRfJDdfepVb9uF7qLmwY6sRc5wQPhkHOAP pybBNetOA8t9ve62FuoEQo6U2dvCYghGdeuBP4/Mp3WN4dnMcm2kgNf+uFUum8s/dq+g 6xa3m2MOT4K7bgsWblDs57+lFH2V2IJE7DhSMRDCgcLqGoKkN6XXwHlTwbtYVyJ6UMt+ 
SMxWEeSDXg9/97MDii9zMSv+48KGngcn54riDTRzZGIs7riSB4w7n2jtv+VWcTbELYc4 U3VztL4sDV9AbW0C+h9GOyY4i/j3O1F5dD3QRTmMAlB6kB/7ubhIUVPxNNqgo8ny8GVk rPXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=PgGybqDLCJbEJ+qjtJtsZofD+YTrV5tTXrhVisMfhNg=; b=C+gKvmYFwSs8n+JVz7zjWnoLWukpBylfs34ZUaNCsWe2HrRgrCK4/CdTjuRR+hqqV8 55C+YiifZ0NmJQHCPMcogclMcfQlKSP3ieSpLV5JZvwuN/3pBP7geEsXMYxKTP/FxJEv Jpt+ejAh/411B/WiY1d2pk/MJ7pT1a9CIx1vBSF3xObZnNdESw23Hf7jxid3MhgRF5Wp PVFzvi1z+4fKh0cchtbea0ftmQ3VytpxQOfnp0/ynqdMpy5ZOcRMV3mcZyvPGlLrmEts Nat+LcQoEF/FKXQ0k6YmfTtRFafTQxfrf+/LJTgDSUV4GP+ceyQT4YVgSzuTZGcKLfg6 IwmA== X-Gm-Message-State: AOUpUlH/K6KqLTJYpVVjKUzzWsV+TXl3QUR8gcXlJN7QPejKz2QAyx1w uLeqMyDhHCu6IMEsnQElCZw= X-Google-Smtp-Source: AA+uWPzy/Qc/u+tbsalOjtYtB+ULJFz4vB5ym6zGshykZ2vWMcVphIll+QuvrkQJZlXByWFc1pGaSQ== X-Received: by 2002:a17:902:694a:: with SMTP id k10-v6mr48781416plt.166.1534839057333; Tue, 21 Aug 2018 01:10:57 -0700 (PDT) Received: from localhost.localdomain ([203.205.141.40]) by smtp.gmail.com with ESMTPSA id r64-v6sm20644023pfk.157.2018.08.21.01.10.53 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 21 Aug 2018 01:10:56 -0700 (PDT) From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 04/10] migration: drop the return value of do_compress_ram_page Date: Tue, 21 Aug 2018 16:10:23 +0800 Message-Id: <20180821081029.26121-5-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong It is not used and cleans the code up a little Reviewed-by: Peter Xu Signed-off-by: Xiao Guangrong Reviewed-by: Juan Quintela --- migration/ram.c | 26 +++++++++++--------------- 1 file changed, 11 insertions(+), 15 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 49ace30614..e463de4f69 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -381,8 +381,8 @@ static QemuThread *decompress_threads; static QemuMutex decomp_done_lock; static QemuCond decomp_done_cond; -static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, - ram_addr_t offset, uint8_t *source_buf); +static void do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, + ram_addr_t offset, uint8_t *source_buf); static void *do_data_compress(void *opaque) { @@ -1842,15 +1842,14 @@ static int ram_save_multifd_page(RAMState *rs, RAMBlock *block, return 1; } -static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, - ram_addr_t offset, uint8_t *source_buf) +static void do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, + ram_addr_t offset, uint8_t *source_buf) { RAMState *rs = ram_state; - int bytes_sent, blen; uint8_t *p = block->host + (offset & TARGET_PAGE_MASK); + int ret; - bytes_sent = save_page_header(rs, f, block, offset | - RAM_SAVE_FLAG_COMPRESS_PAGE); + save_page_header(rs, f, block, offset | RAM_SAVE_FLAG_COMPRESS_PAGE); /* * copy it to a internal buffer to avoid it being 
modified by VM @@ -1858,17 +1857,14 @@ static int do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, * decompression */ memcpy(source_buf, p, TARGET_PAGE_SIZE); - blen = qemu_put_compression_data(f, stream, source_buf, TARGET_PAGE_SIZE); - if (blen < 0) { - bytes_sent = 0; - qemu_file_set_error(migrate_get_current()->to_dst_file, blen); + ret = qemu_put_compression_data(f, stream, source_buf, TARGET_PAGE_SIZE); + if (ret < 0) { + qemu_file_set_error(migrate_get_current()->to_dst_file, ret); error_report("compressed data failed!"); - } else { - bytes_sent += blen; - ram_release_pages(block->idstr, offset & TARGET_PAGE_MASK, 1); + return; } - return bytes_sent; + ram_release_pages(block->idstr, offset & TARGET_PAGE_MASK, 1); } static void flush_compressed_data(RAMState *rs) From patchwork Tue Aug 21 08:10:24 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571181 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0BFD31390 for ; Tue, 21 Aug 2018 08:11:04 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id F0E8929D5F for ; Tue, 21 Aug 2018 08:11:03 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id EF5EF29D76; Tue, 21 Aug 2018 08:11:03 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5CD5A29DEE for ; Tue, 21 Aug 2018 08:11:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726785AbeHULaJ (ORCPT ); Tue, 21 Aug 2018 07:30:09 -0400 Received: from mail-pf1-f195.google.com ([209.85.210.195]:41321 "EHLO mail-pf1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726549AbeHULaJ (ORCPT ); Tue, 21 Aug 2018 07:30:09 -0400 Received: by mail-pf1-f195.google.com with SMTP id y10-v6so8120084pfn.8 for ; Tue, 21 Aug 2018 01:11:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=Mpj8yQuwRdV/vLGpbcOGX12JZpxcXgb1YPncAiiKYpo=; b=e2k/ZV8LKIvSVwCjDK/NDesXCDmw/szfyHg+gjSWnT2hM8ULCNxDEJ0l+fonhMY6zq y83u2ce7eeHz7dLiTMcHuznasgl58W/deevH+uNvEepRlVJatAgUIRfEpEYM1i1m/liR C2U5o9K7+MSbFtYbLhbnJwDKJAr0kslnI+fyBuhuq1vM1m5jRoc+MHMvzmPObOkCjx6n ozDug0666+x+xCYGk/jTR3Mrum8a99fCfa6orDGeMfXvMmZUSA50cS82BjvWaCvFfEa6 tfaASX8+ntGRghts/yoyyjSRJNPp5uOebM+0cA1we0fc0RW5VPcs0G+mywzd6gRerL/O HQsw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Mpj8yQuwRdV/vLGpbcOGX12JZpxcXgb1YPncAiiKYpo=; b=jCReyAuBuAqFD7hymjy96f7bYZP5D3MYquuhjYdqVNZzGMldYsUoV2KQSzqx6rn2CJ klkl7xUH9b4h5BhOY3Gf41WTjLhgQhB7X1PLZKYRbFGzUvHqsidux7rOhkIOGT/5YJWz DFN6ky3ZH46PU4Wsa8E1IKy193ldigfaD926M506uf2vTekqBz5DWVetbFXoOk4Mkzc5 0fHHBeGKWKLfR6m9rsP2lWQ4eQgGeziWjYKWLqA8LRqtW3CiJ2i47cyhHD7x81Q+wDWx 72qLF31AtUeu0az/wwnrfKij12j2mGvrpWY8WM5D7zC054hfZgJeJ9i+NW7/9bZ3Jnfe p1fQ== 
X-Gm-Message-State: APzg51CS0S2v94DvJa6Phb9TYkOeqwEbdpDuQWluEwOFiz+6HIvHhR9o aGLPwjixlwOhIoMufULdIrs= X-Google-Smtp-Source: ANB0Vda4rrfMDEItwJ2eItlqniH6uqSy8b0YqGMrspBugrlXmuQ/VMKff7+qIyXeRsjB9DGxjPE1gA== X-Received: by 2002:a62:1192:: with SMTP id 18-v6mr402628pfr.54.1534839061080; Tue, 21 Aug 2018 01:11:01 -0700 (PDT) Received: from localhost.localdomain ([203.205.141.40]) by smtp.gmail.com with ESMTPSA id r64-v6sm20644023pfk.157.2018.08.21.01.10.57 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 21 Aug 2018 01:11:00 -0700 (PDT) From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 05/10] migration: move handle of zero page to the thread Date: Tue, 21 Aug 2018 16:10:24 +0800 Message-Id: <20180821081029.26121-6-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong Detecting zero page is not a light work, moving it to the thread to speed the main thread up, btw, handling ram_release_pages() for the zero page is moved to the thread as well Reviewed-by: Peter Xu Signed-off-by: Xiao Guangrong Reviewed-by: Juan Quintela --- migration/ram.c | 96 +++++++++++++++++++++++++++++++++++++++++---------------- 1 file changed, 70 insertions(+), 26 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index e463de4f69..d804d01aae 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -340,6 +340,7 @@ typedef struct PageSearchStatus PageSearchStatus; struct CompressParam { bool done; bool quit; + bool zero_page; QEMUFile *file; QemuMutex mutex; QemuCond cond; @@ -381,7 +382,7 @@ static QemuThread *decompress_threads; static QemuMutex decomp_done_lock; static QemuCond decomp_done_cond; -static void do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, +static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, ram_addr_t offset, uint8_t *source_buf); static void *do_data_compress(void *opaque) @@ -389,6 +390,7 @@ static void *do_data_compress(void *opaque) CompressParam *param = opaque; RAMBlock *block; ram_addr_t offset; + bool zero_page; qemu_mutex_lock(¶m->mutex); while (!param->quit) { @@ -398,11 +400,12 @@ static void *do_data_compress(void *opaque) param->block = NULL; qemu_mutex_unlock(¶m->mutex); - do_compress_ram_page(param->file, ¶m->stream, block, offset, - param->originbuf); + zero_page = do_compress_ram_page(param->file, ¶m->stream, + block, offset, param->originbuf); qemu_mutex_lock(&comp_done_lock); param->done = true; + param->zero_page = zero_page; qemu_cond_signal(&comp_done_cond); qemu_mutex_unlock(&comp_done_lock); @@ -1842,13 +1845,19 @@ static int ram_save_multifd_page(RAMState *rs, RAMBlock *block, return 1; } -static void do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, +static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, ram_addr_t offset, uint8_t *source_buf) { RAMState *rs = ram_state; uint8_t *p = block->host + (offset & TARGET_PAGE_MASK); + bool zero_page = false; int ret; + if (save_zero_page_to_file(rs, f, 
block, offset)) { + zero_page = true; + goto exit; + } + save_page_header(rs, f, block, offset | RAM_SAVE_FLAG_COMPRESS_PAGE); /* @@ -1861,10 +1870,21 @@ static void do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, if (ret < 0) { qemu_file_set_error(migrate_get_current()->to_dst_file, ret); error_report("compressed data failed!"); - return; + return false; } +exit: ram_release_pages(block->idstr, offset & TARGET_PAGE_MASK, 1); + return zero_page; +} + +static void +update_compress_thread_counts(const CompressParam *param, int bytes_xmit) +{ + if (param->zero_page) { + ram_counters.duplicate++; + } + ram_counters.transferred += bytes_xmit; } static void flush_compressed_data(RAMState *rs) @@ -1888,7 +1908,12 @@ static void flush_compressed_data(RAMState *rs) qemu_mutex_lock(&comp_param[idx].mutex); if (!comp_param[idx].quit) { len = qemu_put_qemu_file(rs->f, comp_param[idx].file); - ram_counters.transferred += len; + /* + * it's safe to fetch zero_page without holding comp_done_lock + * as there is no further request submitted to the thread, + * i.e, the thread should be waiting for a request at this point. + */ + update_compress_thread_counts(&comp_param[idx], len); } qemu_mutex_unlock(&comp_param[idx].mutex); } @@ -1919,7 +1944,7 @@ retry: qemu_cond_signal(&comp_param[idx].cond); qemu_mutex_unlock(&comp_param[idx].mutex); pages = 1; - ram_counters.transferred += bytes_xmit; + update_compress_thread_counts(&comp_param[idx], bytes_xmit); break; } } @@ -2193,6 +2218,39 @@ static bool save_page_use_compression(RAMState *rs) return false; } +/* + * try to compress the page before posting it out, return true if the page + * has been properly handled by compression, otherwise needs other + * paths to handle it + */ +static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset) +{ + if (!save_page_use_compression(rs)) { + return false; + } + + /* + * When starting the process of a new block, the first page of + * the block should be sent out before other pages in the same + * block, and all the pages in last block should have been sent + * out, keeping this order is important, because the 'cont' flag + * is used to avoid resending the block name. + * + * We post the fist page as normal page as compression will take + * much CPU resource. + */ + if (block != rs->last_sent_block) { + flush_compressed_data(rs); + return false; + } + + if (compress_page_with_multi_thread(rs, block, offset) > 0) { + return true; + } + + return false; +} + /** * ram_save_target_page: save one target page * @@ -2213,15 +2271,8 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss, return res; } - /* - * When starting the process of a new block, the first page of - * the block should be sent out before other pages in the same - * block, and all the pages in last block should have been sent - * out, keeping this order is important, because the 'cont' flag - * is used to avoid resending the block name. - */ - if (block != rs->last_sent_block && save_page_use_compression(rs)) { - flush_compressed_data(rs); + if (save_compress_page(rs, block, offset)) { + return 1; } res = save_zero_page(rs, block, offset); @@ -2239,17 +2290,10 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss, } /* - * Make sure the first page is sent out before other pages. - * - * we post it as normal page as compression will take much - * CPU resource. 
+ * do not use multifd for compression as the first page in the new + * block should be posted out before sending the compressed page */ - if (block == rs->last_sent_block && save_page_use_compression(rs)) { - res = compress_page_with_multi_thread(rs, block, offset); - if (res > 0) { - return res; - } - } else if (migrate_use_multifd()) { + if (!save_page_use_compression(rs) && migrate_use_multifd()) { return ram_save_multifd_page(rs, block, offset); } From patchwork Tue Aug 21 08:10:25 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571183 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8AAA11390 for ; Tue, 21 Aug 2018 08:11:07 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7DFAB29DA7 for ; Tue, 21 Aug 2018 08:11:07 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7C2E229DC8; Tue, 21 Aug 2018 08:11:07 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0C67C29E03 for ; Tue, 21 Aug 2018 08:11:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726795AbeHULaN (ORCPT ); Tue, 21 Aug 2018 07:30:13 -0400 Received: from mail-pg1-f195.google.com ([209.85.215.195]:47054 "EHLO mail-pg1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726549AbeHULaN (ORCPT ); Tue, 21 Aug 2018 07:30:13 -0400 Received: by mail-pg1-f195.google.com with SMTP id f14-v6so8086232pgv.13 for ; Tue, 21 Aug 2018 01:11:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=9aVx/9MXIX8chFdQ0nNGdumPNvIXauKoEQdqfl233Qw=; b=lsFpKXqjMyEpvQwXohEcz/fgRiYW2aH4F7zLB2m648UhLgt5HfsF6+TNf5zWLZu6XL BKF1llJ0ul7G2Kxosfq7QoG6kpd3HhTDZ7fPbnIq5gE1uLaRwA/SssljTIZ5YRhICrlX 6F5kPUuREaWWg+e48GcrRNdgbJ5Pom7o+pUo879KybTfdUE64gfYtFawMyO7wdDz9drw 0ITF/JpeOWzqlBOU3hnHxacZltERWLVxcx+FtqT0Od4kTlRddyN5i8QOr0ndDg2ExEps L+0/oYYfVTId7UOns/lTNbjzLHlgIrntI/aK7wpb4+lY2u6vTqz+a1bTkF8LCHnVxaqy NH7Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=9aVx/9MXIX8chFdQ0nNGdumPNvIXauKoEQdqfl233Qw=; b=RBhtXEQWgW95Va3Ih6awm6aJCMJB29lfZ5guhHFRymLFxtZhgvMlloScINYZShowx/ CAzJ/n5YFAccmuENVCx/IKxafWypVyWpE37ESyCW0meK/PX+Sxr2rNVoND2/5gpIIgk4 fUWEl1jo4jHSpHA+KctwFmUfMaqPgxdUOFh5Gcw9C2e24RT4qgDdxjgholWnuilv/DYC Z3cKNueVop9Ia1zEYuvdM0ZP0ienHXUT8XUHq8ulczMCyhXcfCqFjAJH6h356Vq7njC8 GYONZcxN9XPyPUqwQNNCnWfKc83XLnfHhe/fblOguEq/mrNcvKnFGZrGJMp1QlSAWVJ1 dVzQ== X-Gm-Message-State: AOUpUlHEkdlMkPNsVueINfeZpvV02KwlzV8awbXRzZQ8iBeYnrvqkNb5 ly9ebU9KDGTlYhhlbAwqD40= X-Google-Smtp-Source: AA+uWPyqb9BWmUu/C3oPARoLUwDy7/ggJLgff7L+b5HrrVQ2PDPpwH4iIkHDnC+xPdbIetINUcTenQ== X-Received: by 2002:a63:de10:: with SMTP id f16-v6mr45715875pgg.97.1534839064981; Tue, 21 Aug 2018 01:11:04 -0700 (PDT) Received: from 
localhost.localdomain ([203.205.141.40]) by smtp.gmail.com with ESMTPSA id r64-v6sm20644023pfk.157.2018.08.21.01.11.01 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 21 Aug 2018 01:11:04 -0700 (PDT) From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 06/10] migration: hold the lock only if it is really needed Date: Tue, 21 Aug 2018 16:10:25 +0800 Message-Id: <20180821081029.26121-7-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong Try to hold src_page_req_mutex only if the queue is not empty Reviewed-by: Dr. David Alan Gilbert Reviewed-by: Peter Xu Signed-off-by: Xiao Guangrong Reviewed-by: Juan Quintela --- include/qemu/queue.h | 1 + migration/ram.c | 4 ++++ 2 files changed, 5 insertions(+) diff --git a/include/qemu/queue.h b/include/qemu/queue.h index 59fd1203a1..ac418efc43 100644 --- a/include/qemu/queue.h +++ b/include/qemu/queue.h @@ -341,6 +341,7 @@ struct { \ /* * Simple queue access methods. */ +#define QSIMPLEQ_EMPTY_ATOMIC(head) (atomic_read(&((head)->sqh_first)) == NULL) #define QSIMPLEQ_EMPTY(head) ((head)->sqh_first == NULL) #define QSIMPLEQ_FIRST(head) ((head)->sqh_first) #define QSIMPLEQ_NEXT(elm, field) ((elm)->field.sqe_next) diff --git a/migration/ram.c b/migration/ram.c index d804d01aae..99ecf9b315 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -2026,6 +2026,10 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset) { RAMBlock *block = NULL; + if (QSIMPLEQ_EMPTY_ATOMIC(&rs->src_page_requests)) { + return NULL; + } + qemu_mutex_lock(&rs->src_page_req_mutex); if (!QSIMPLEQ_EMPTY(&rs->src_page_requests)) { struct RAMSrcPageRequest *entry = From patchwork Tue Aug 21 08:10:26 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571185 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 38E84920 for ; Tue, 21 Aug 2018 08:11:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2A98829D4F for ; Tue, 21 Aug 2018 08:11:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1E0DA29DD0; Tue, 21 Aug 2018 08:11:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A5CBB29E1E for ; Tue, 21 Aug 2018 08:11:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726800AbeHULaR (ORCPT ); Tue, 21 Aug 2018 07:30:17 -0400 Received: from mail-pl0-f44.google.com 
([209.85.160.44]:43889 "EHLO mail-pl0-f44.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726797AbeHULaQ (ORCPT ); Tue, 21 Aug 2018 07:30:16 -0400 Received: by mail-pl0-f44.google.com with SMTP id x6-v6so8431518plv.10 for ; Tue, 21 Aug 2018 01:11:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=JptWhENmTc9OgHjeNPCYSrWFfpBwn6ZucJMoCRDXpKw=; b=YTlNSL6+y8M3dZXnQ3Ea+fPMUj2ttW1Gld9tV8j+IP6udSBP5bCuyx67p5IhQxtLxt sHZ1lKUbLz9fSNB/1NP8sinOzRO9DHO9JDnCrbo2ItWeMHGbcQykuRbuZm6AKO01beys p8TaVeBY9S4eJ7aRV+budiB/ndJfX0xIDKeWSmob19BpmXP4Nv92zP15sLkP1O5K9s1J jesHlojGitY7QqT5NoGidRcPm/UyZyZfUQW6jVYm1wpOplm0+U7BFNYdRqFNcLyE9IZC uAQM9w4gkoLpPatM0x1PZbNEoH37R3sHsQGxxbp7VBNV8onbWkcD3JT5IQ7aK44vE5xG YZyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=JptWhENmTc9OgHjeNPCYSrWFfpBwn6ZucJMoCRDXpKw=; b=qNZgYxHNJMsbcjl7waaKfDgB+0Yc22LyZTItEFfqru/3A/q+JXbMzLmkqFYDuhCKxC ZD+jwL26Kjdf93LQ3JPOQlYKqZCalmWyH512rHxsz4V8iO4JsbRwJct+pB/FBbCAjDN4 NElsuGlU3Q3kNRhRBWdrEwugAnaw1kDmH9DwxSze2ulS8X7yEqfIc7+iQsOZgOSsIacQ dyP5LIMtr21CngfVQY+JSjGzP8CCC8r9q9kihSH3CBMmYby5Qw9xhowEzTt63cGzbHB9 Y5Hqlf525Dvk+iZSvbLTaAqX4G6g2wSloI7XNlKR8Vyh59Ykb6jdjedi5aEAPPO1lXnP CHyw== X-Gm-Message-State: AOUpUlEBwVtAdr+ZORGNg9fm5AEY/tlIkbXWbQqTn7/IMkNz01wMkKlZ 4XUzsBC5gkNWnMz+lUXUtew= X-Google-Smtp-Source: AA+uWPzDPpo377nBrXOXhHDgJMBPFIMn9RiO1cXFIxBg44n1sJtLN1Ur4YOBuG1bK5GOvtp6pDE4Aw== X-Received: by 2002:a17:902:b20d:: with SMTP id t13-v6mr8266086plr.107.1534839068538; Tue, 21 Aug 2018 01:11:08 -0700 (PDT) Received: from localhost.localdomain ([203.205.141.40]) by smtp.gmail.com with ESMTPSA id r64-v6sm20644023pfk.157.2018.08.21.01.11.05 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 21 Aug 2018 01:11:08 -0700 (PDT) From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 07/10] migration: do not flush_compressed_data at the end of each iteration Date: Tue, 21 Aug 2018 16:10:26 +0800 Message-Id: <20180821081029.26121-8-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong flush_compressed_data() needs to wait all compression threads to finish their work, after that all threads are free until the migration feeds new request to them, reducing its call can improve the throughput and use CPU resource more effectively We do not need to flush all threads at the end of iteration, the data can be kept locally until the memory block is changed or memory migration starts over in that case we will meet a dirtied page which may still exists in compression threads's ring Signed-off-by: Xiao Guangrong --- migration/ram.c | 90 +++++++++++++++++++++++++++++++-------------------------- 1 file changed, 49 insertions(+), 41 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 99ecf9b315..1d54285501 100644 --- a/migration/ram.c +++ 
b/migration/ram.c @@ -1602,6 +1602,47 @@ static void migration_update_rates(RAMState *rs, int64_t end_time) } } +static void +update_compress_thread_counts(const CompressParam *param, int bytes_xmit) +{ + if (param->zero_page) { + ram_counters.duplicate++; + } + ram_counters.transferred += bytes_xmit; +} + +static void flush_compressed_data(RAMState *rs) +{ + int idx, len, thread_count; + + if (!migrate_use_compression()) { + return; + } + thread_count = migrate_compress_threads(); + + qemu_mutex_lock(&comp_done_lock); + for (idx = 0; idx < thread_count; idx++) { + while (!comp_param[idx].done) { + qemu_cond_wait(&comp_done_cond, &comp_done_lock); + } + } + qemu_mutex_unlock(&comp_done_lock); + + for (idx = 0; idx < thread_count; idx++) { + qemu_mutex_lock(&comp_param[idx].mutex); + if (!comp_param[idx].quit) { + len = qemu_put_qemu_file(rs->f, comp_param[idx].file); + /* + * it's safe to fetch zero_page without holding comp_done_lock + * as there is no further request submitted to the thread, + * i.e, the thread should be waiting for a request at this point. + */ + update_compress_thread_counts(&comp_param[idx], len); + } + qemu_mutex_unlock(&comp_param[idx].mutex); + } +} + static void migration_bitmap_sync(RAMState *rs) { RAMBlock *block; @@ -1610,6 +1651,14 @@ static void migration_bitmap_sync(RAMState *rs) ram_counters.dirty_sync_count++; + /* + * if memory migration starts over, we will meet a dirtied page which + * may still exists in compression threads's ring, so we should flush + * the compressed data to make sure the new page is not overwritten by + * the old one in the destination. + */ + flush_compressed_data(rs); + if (!rs->time_last_bitmap_sync) { rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME); } @@ -1878,47 +1927,6 @@ exit: return zero_page; } -static void -update_compress_thread_counts(const CompressParam *param, int bytes_xmit) -{ - if (param->zero_page) { - ram_counters.duplicate++; - } - ram_counters.transferred += bytes_xmit; -} - -static void flush_compressed_data(RAMState *rs) -{ - int idx, len, thread_count; - - if (!migrate_use_compression()) { - return; - } - thread_count = migrate_compress_threads(); - - qemu_mutex_lock(&comp_done_lock); - for (idx = 0; idx < thread_count; idx++) { - while (!comp_param[idx].done) { - qemu_cond_wait(&comp_done_cond, &comp_done_lock); - } - } - qemu_mutex_unlock(&comp_done_lock); - - for (idx = 0; idx < thread_count; idx++) { - qemu_mutex_lock(&comp_param[idx].mutex); - if (!comp_param[idx].quit) { - len = qemu_put_qemu_file(rs->f, comp_param[idx].file); - /* - * it's safe to fetch zero_page without holding comp_done_lock - * as there is no further request submitted to the thread, - * i.e, the thread should be waiting for a request at this point. 
- */ - update_compress_thread_counts(&comp_param[idx], len); - } - qemu_mutex_unlock(&comp_param[idx].mutex); - } -} - static inline void set_compress_params(CompressParam *param, RAMBlock *block, ram_addr_t offset) { From patchwork Tue Aug 21 08:10:27 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571187 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CFCCE1390 for ; Tue, 21 Aug 2018 08:11:14 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C3A0B29D5F for ; Tue, 21 Aug 2018 08:11:14 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C165029E21; Tue, 21 Aug 2018 08:11:14 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3B48129E03 for ; Tue, 21 Aug 2018 08:11:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726808AbeHULaU (ORCPT ); Tue, 21 Aug 2018 07:30:20 -0400 Received: from mail-pl0-f68.google.com ([209.85.160.68]:41648 "EHLO mail-pl0-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726802AbeHULaU (ORCPT ); Tue, 21 Aug 2018 07:30:20 -0400 Received: by mail-pl0-f68.google.com with SMTP id p4-v6so4877959pll.8 for ; Tue, 21 Aug 2018 01:11:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=KhbbMcS7jFEoTFCG7r76No1VG+edyYKzHscFtQ9vfmI=; b=PJEx/FzGvSX7/wPkHSirBaz9qv3UUVJRVa433c7l/MCm6n8SdEAIufC1zetkQg+Qtu rozDjPia6xkDmlyxnIL7RmmVUaKkphtFSpsedes1Y9Wd6JLtEDZcsLNeXvBGbiHmRP7a us+pTMEUunh4DKHmRVEbKN1Qk3GcOcVa63ZVuRr3t7by3UGIrzI3OFB10HFHJv77jJax fXIBbOKk/3w5pwbWXcpm9khrt3TsCGzRaCAcNwzbxaXjyoypV7mqK06wSjrCrYQZIwPr 9KoAXYDPDhE9h/GEu8erlg5Hl2Y5NcX7spH41HHcSg+6UoBISTnsY03wloafdSBT2e2J 1s2A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=KhbbMcS7jFEoTFCG7r76No1VG+edyYKzHscFtQ9vfmI=; b=t0a0topamoa/RQhPvbnA3R9ZK4nSAtCv49QoI0MgzlbI+h1ggCh6WuDKrn4kQV0bcC 3csEBaAQ8Q9Mjjd05XeAEwNJ2lVJSlGAgNaFJzuoVOIxOvzSXVil84R0T3UzlgzzUrBe ekiEt+62PxjA0zk1qAwKvQheTH2qH6GuJeF5uQ7nZLavsFMozi4DSKV6cGmMUEviArUE C1eVu9CUUPV0oxx0empzqd+hwzxcJRKq/eLhHFvmFlAhh5CaBhXDHEJ1o97m02Oq5fsF 6a7DcVO+oBXX79QkMEP8BWdpcpevZtPEuYhBHqzvPt64K5xAPOvUObAk1aHqDP8pKwkF 9VRw== X-Gm-Message-State: AOUpUlHXlOl+qEeuQP0beYb047gjNOX7pXSg/MQHVr5fbPwwbagqbYR7 rpHKrCkEW5iNrjdQ3HAarg8rrGCx X-Google-Smtp-Source: AA+uWPxjAiPHKdfReVI5Qkqfzj2888zaRG6Hz1f6pgSn7irgQRhH0qK1lpGxDlEViXT5gFGoEcQqzQ== X-Received: by 2002:a17:902:33c2:: with SMTP id b60-v6mr48775225plc.11.1534839072181; Tue, 21 Aug 2018 01:11:12 -0700 (PDT) Received: from localhost.localdomain ([203.205.141.40]) by smtp.gmail.com with ESMTPSA id r64-v6sm20644023pfk.157.2018.08.21.01.11.08 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 21 Aug 2018 01:11:11 -0700 (PDT) From: 
guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 08/10] migration: fix calculating xbzrle_counters.cache_miss_rate Date: Tue, 21 Aug 2018 16:10:27 +0800 Message-Id: <20180821081029.26121-9-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong As Peter pointed out: | - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's | per-guest-page granularity | | - RAMState.iterations is done for each ram_find_and_save_block(), so | it's per-host-page granularity | | An example is that when we migrate a 2M huge page in the guest, we | will only increase the RAMState.iterations by 1 (since | ram_find_and_save_block() will be called once), but we might increase | xbzrle_counters.cache_miss for 2M/4K=512 times (we'll call | save_xbzrle_page() that many times) if all the pages got cache miss. | Then IMHO the cache miss rate will be 512/1=51200% (while it should | actually be just 100% cache miss). And he also suggested as xbzrle_counters.cache_miss_rate is the only user of rs->iterations we can adapt it to count target guest page numbers After that, rename 'iterations' to 'target_page_count' to better reflect its meaning Suggested-by: Peter Xu Signed-off-by: Xiao Guangrong Reviewed-by: Peter Xu --- migration/ram.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 1d54285501..17c3eed445 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -300,10 +300,10 @@ struct RAMState { uint64_t num_dirty_pages_period; /* xbzrle misses since the beginning of the period */ uint64_t xbzrle_cache_miss_prev; - /* number of iterations at the beginning of period */ - uint64_t iterations_prev; - /* Iterations since start */ - uint64_t iterations; + /* total handled target pages at the beginning of period */ + uint64_t target_page_count_prev; + /* total handled target pages since start */ + uint64_t target_page_count; /* number of dirty bits in the bitmap */ uint64_t migration_dirty_pages; /* protects modification of the bitmap */ @@ -1585,19 +1585,19 @@ uint64_t ram_pagesize_summary(void) static void migration_update_rates(RAMState *rs, int64_t end_time) { - uint64_t iter_count = rs->iterations - rs->iterations_prev; + uint64_t page_count = rs->target_page_count - rs->target_page_count_prev; /* calculate period counters */ ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000 / (end_time - rs->time_last_bitmap_sync); - if (!iter_count) { + if (!page_count) { return; } if (migrate_use_xbzrle()) { xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss - - rs->xbzrle_cache_miss_prev) / iter_count; + rs->xbzrle_cache_miss_prev) / page_count; rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss; } } @@ -1704,7 +1704,7 @@ static void migration_bitmap_sync(RAMState *rs) migration_update_rates(rs, end_time); - rs->iterations_prev = rs->iterations; + rs->target_page_count_prev = rs->target_page_count; /* reset period counters */ rs->time_last_bitmap_sync = 
end_time; @@ -3197,7 +3197,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque) done = 1; break; } - rs->iterations++; + rs->target_page_count += pages; /* we want to check in the 1st loop, just in case it was the 1st time and we had to sync the dirty bitmap.
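To make the granularity mismatch described in the message above concrete, here is a small illustrative calculation; the numbers follow the 2M-huge-page example and are not part of the patch.

/* Illustrative only: one 2M huge page migrated as 512 4K target pages,
 * all of them XBZRLE cache misses. */
#include <stdio.h>

int main(void)
{
    double cache_miss = 512;      /* counted per 4K target page */
    double host_iterations = 1;   /* one ram_find_and_save_block() call */
    double target_pages = 512;    /* what rs->target_page_count now counts */

    printf("old rate: %.0f%%\n", 100 * cache_miss / host_iterations); /* 51200% */
    printf("new rate: %.0f%%\n", 100 * cache_miss / target_pages);    /* 100% */
    return 0;
}

From patchwork Tue Aug 21 08:10:28 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571189 Received: from localhost.localdomain ([203.205.141.40]) by smtp.gmail.com with ESMTPSA id r64-v6sm20644023pfk.157.2018.08.21.01.11.12 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 21 Aug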
2018 01:11:15 -0700 (PDT) From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 09/10] migration: show the statistics of compression Date: Tue, 21 Aug 2018 16:10:28 +0800 Message-Id: <20180821081029.26121-10-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Xiao Guangrong Currently, it includes: pages: amount of pages compressed and transferred to the target VM busy: amount of count that no free thread to compress data busy-rate: rate of thread busy compressed-size: amount of bytes after compression compression-rate: rate of compressed size Reviewed-by: Peter Xu Signed-off-by: Xiao Guangrong --- hmp.c | 13 +++++++++++++ migration/migration.c | 12 ++++++++++++ migration/ram.c | 41 ++++++++++++++++++++++++++++++++++++++++- migration/ram.h | 1 + qapi/migration.json | 26 +++++++++++++++++++++++++- 5 files changed, 91 insertions(+), 2 deletions(-) diff --git a/hmp.c b/hmp.c index 47d36e3ccf..e76e45e672 100644 --- a/hmp.c +++ b/hmp.c @@ -271,6 +271,19 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict) info->xbzrle_cache->overflow); } + if (info->has_compression) { + monitor_printf(mon, "compression pages: %" PRIu64 " pages\n", + info->compression->pages); + monitor_printf(mon, "compression busy: %" PRIu64 "\n", + info->compression->busy); + monitor_printf(mon, "compression busy rate: %0.2f\n", + info->compression->busy_rate); + monitor_printf(mon, "compressed size: %" PRIu64 "\n", + info->compression->compressed_size); + monitor_printf(mon, "compression rate: %0.2f\n", + info->compression->compression_rate); + } + if (info->has_cpu_throttle_percentage) { monitor_printf(mon, "cpu throttle percentage: %" PRIu64 "\n", info->cpu_throttle_percentage); diff --git a/migration/migration.c b/migration/migration.c index 2ccaadc03d..4da0a20275 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -754,6 +754,18 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s) info->xbzrle_cache->overflow = xbzrle_counters.overflow; } + if (migrate_use_compression()) { + info->has_compression = true; + info->compression = g_malloc0(sizeof(*info->compression)); + info->compression->pages = compression_counters.pages; + info->compression->busy = compression_counters.busy; + info->compression->busy_rate = compression_counters.busy_rate; + info->compression->compressed_size = + compression_counters.compressed_size; + info->compression->compression_rate = + compression_counters.compression_rate; + } + if (cpu_throttle_active()) { info->has_cpu_throttle_percentage = true; info->cpu_throttle_percentage = cpu_throttle_get_percentage(); diff --git a/migration/ram.c b/migration/ram.c index 17c3eed445..0a31767351 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -300,6 +300,15 @@ struct RAMState { uint64_t num_dirty_pages_period; /* xbzrle misses since the beginning of the period */ uint64_t xbzrle_cache_miss_prev; + + /* compression statistics since the beginning of the period */ + /* amount of count that no free thread to compress data */ + 
uint64_t compress_thread_busy_prev; + /* amount bytes after compression */ + uint64_t compressed_size_prev; + /* amount of compressed pages */ + uint64_t compress_pages_prev; + /* total handled target pages at the beginning of period */ uint64_t target_page_count_prev; /* total handled target pages since start */ @@ -337,6 +346,8 @@ struct PageSearchStatus { }; typedef struct PageSearchStatus PageSearchStatus; +CompressionStats compression_counters; + struct CompressParam { bool done; bool quit; @@ -1586,6 +1597,7 @@ uint64_t ram_pagesize_summary(void) static void migration_update_rates(RAMState *rs, int64_t end_time) { uint64_t page_count = rs->target_page_count - rs->target_page_count_prev; + double compressed_size; /* calculate period counters */ ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000 @@ -1600,15 +1612,41 @@ static void migration_update_rates(RAMState *rs, int64_t end_time) rs->xbzrle_cache_miss_prev) / page_count; rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss; } + + if (migrate_use_compression()) { + compression_counters.busy_rate = (double)(compression_counters.busy - + rs->compress_thread_busy_prev) / page_count; + rs->compress_thread_busy_prev = compression_counters.busy; + + compressed_size = compression_counters.compressed_size - + rs->compressed_size_prev; + if (compressed_size) { + double uncompressed_size = (compression_counters.pages - + rs->compress_pages_prev) * TARGET_PAGE_SIZE; + + /* Compression-Ratio = Uncompressed-size / Compressed-size */ + compression_counters.compression_rate = + uncompressed_size / compressed_size; + + rs->compress_pages_prev = compression_counters.pages; + rs->compressed_size_prev = compression_counters.compressed_size; + } + } } static void update_compress_thread_counts(const CompressParam *param, int bytes_xmit) { + ram_counters.transferred += bytes_xmit; + if (param->zero_page) { ram_counters.duplicate++; + return; } - ram_counters.transferred += bytes_xmit; + + /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. 
*/ + compression_counters.compressed_size += bytes_xmit - 8; + compression_counters.pages++; } static void flush_compressed_data(RAMState *rs) @@ -2260,6 +2298,7 @@ static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset) return true; } + compression_counters.busy++; return false; } diff --git a/migration/ram.h b/migration/ram.h index 457bf54b8c..a139066846 100644 --- a/migration/ram.h +++ b/migration/ram.h @@ -36,6 +36,7 @@ extern MigrationStats ram_counters; extern XBZRLECacheStats xbzrle_counters; +extern CompressionStats compression_counters; int xbzrle_cache_resize(int64_t new_size, Error **errp); uint64_t ram_bytes_remaining(void); diff --git a/qapi/migration.json b/qapi/migration.json index 940cb5cbd0..a35a3d01d5 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -75,6 +75,27 @@ 'cache-miss': 'int', 'cache-miss-rate': 'number', 'overflow': 'int' } } +## +# @CompressionStats: +# +# Detailed migration compression statistics +# +# @pages: amount of pages compressed and transferred to the target VM +# +# @busy: count of times that no free thread was available to compress data +# +# @busy-rate: rate of thread busy +# +# @compressed-size: amount of bytes after compression +# +# @compression-rate: rate of compressed size +# +# Since: 3.1 +## +{ 'struct': 'CompressionStats', + 'data': {'pages': 'int', 'busy': 'int', 'busy-rate': 'number', + 'compressed-size': 'int', 'compression-rate': 'number' } } + ## # @MigrationStatus: # @@ -172,6 +193,8 @@ # only present when the postcopy-blocktime migration capability # is enabled. (Since 3.0) # +# @compression: migration compression statistics, only returned if compression +# feature is on and status is 'active' or 'completed' (Since 3.1) # # Since: 0.14.0 ## @@ -186,7 +209,8 @@ '*cpu-throttle-percentage': 'int', '*error-desc': 'str', '*postcopy-blocktime' : 'uint32', - '*postcopy-vcpu-blocktime': ['uint32']} } + '*postcopy-vcpu-blocktime': ['uint32'], + '*compression': 'CompressionStats'} } ## # @query-migrate:
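For a rough picture of how the new period counters turn into the rates reported by this patch, here is an illustrative derivation mirroring the migration_update_rates() changes above; all values are invented and TARGET_PAGE_SIZE is assumed to be 4K.

/* Illustrative only: deltas accumulated over one bitmap-sync period. */
#include <stdio.h>

#define TARGET_PAGE_SIZE 4096

int main(void)
{
    unsigned long busy = 200;                     /* times no free thread was available */
    unsigned long target_pages = 1000;            /* target pages handled in the period */
    unsigned long pages = 800;                    /* pages actually compressed */
    unsigned long compressed_size = 800UL * 1200; /* bytes after compression */

    double busy_rate = (double)busy / target_pages;
    double compression_rate =
        (double)(pages * TARGET_PAGE_SIZE) / compressed_size;

    printf("compression busy rate: %0.2f\n", busy_rate);   /* 0.20 */
    printf("compression rate: %0.2f\n", compression_rate); /* ~3.41 */
    return 0;
}

A busy rate close to 1.0 means the compression threads could not keep up with the pages offered to them, while the compression rate shows how much the pages that were compressed actually shrank.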
From patchwork Tue Aug 21 08:10:29 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xiao Guangrong X-Patchwork-Id: 10571191 From: guangrong.xiao@gmail.com X-Google-Original-From: xiaoguangrong@tencent.com To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com, Xiao Guangrong Subject: [PATCH v4 10/10] migration: handle the error condition properly Date: Tue, 21 Aug 2018 16:10:29 +0800 Message-Id: <20180821081029.26121-11-xiaoguangrong@tencent.com> X-Mailer: git-send-email 2.14.4 In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com> References: <20180821081029.26121-1-xiaoguangrong@tencent.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Xiao Guangrong ram_find_and_save_block() can return a negative value if an error happens; however, the current code completely ignores it. Signed-off-by: Xiao Guangrong --- migration/ram.c | 18 +++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 0a31767351..74899b485f 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -2412,7 +2412,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss, * * Called within an RCU critical section. 
* - * Returns the number of pages written where zero means no dirty pages + * Returns the number of pages written where zero means no dirty pages, + * or negative on error * * @rs: current RAM state * @last_stage: if we are at the completion stage @@ -3236,6 +3237,12 @@ static int ram_save_iterate(QEMUFile *f, void *opaque) done = 1; break; } + + if (pages < 0) { + qemu_file_set_error(f, pages); + break; + } + rs->target_page_count += pages; /* we want to check in the 1st loop, just in case it was the 1st time @@ -3278,7 +3285,7 @@ out: /** * ram_save_complete: function called to send the remaining amount of ram * - * Returns zero to indicate success + * Returns zero to indicate success or negative on error * * Called with iothread lock * @@ -3289,6 +3296,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque) { RAMState **temp = opaque; RAMState *rs = *temp; + int ret = 0; rcu_read_lock(); @@ -3309,6 +3317,10 @@ static int ram_save_complete(QEMUFile *f, void *opaque) if (pages == 0) { break; } + if (pages < 0) { + ret = pages; + break; + } } flush_compressed_data(rs); @@ -3320,7 +3332,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque) qemu_put_be64(f, RAM_SAVE_FLAG_EOS); qemu_fflush(f); - return 0; + return ret; } static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,