From patchwork Tue Aug 7 09:12:00 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 10558435
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, Xiao Guangrong <xiaoguangrong@tencent.com>,
    qemu-devel@nongnu.org, peterx@redhat.com, dgilbert@redhat.com,
    wei.w.wang@intel.com, jiang.biao2@zte.com.cn
Date: Tue, 7 Aug 2018 17:12:00 +0800
Message-Id: <20180807091209.13531-2-xiaoguangrong@tencent.com>
In-Reply-To: <20180807091209.13531-1-xiaoguangrong@tencent.com>
References: <20180807091209.13531-1-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.4
Subject: [Qemu-devel] [PATCH v3 01/10] migration: do not wait for free thread

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Instead of putting the main thread to sleep to wait for a free
compression thread, we can post the page out directly as a normal
page; this reduces latency and uses CPUs more efficiently.

A new parameter, compress-wait-thread, is introduced. It can be
enabled if the user really wants the old behavior.

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 hmp.c                 |  8 ++++++++
 migration/migration.c | 21 +++++++++++++++++++++
 migration/migration.h |  1 +
 migration/ram.c       | 45 ++++++++++++++++++++++++++-------------------
 qapi/migration.json   | 23 ++++++++++++++++++-----
 5 files changed, 74 insertions(+), 24 deletions(-)

diff --git a/hmp.c b/hmp.c
index 2aafb50e8e..47d36e3ccf 100644
--- a/hmp.c
+++ b/hmp.c
@@ -327,6 +327,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "%s: %u\n",
             MigrationParameter_str(MIGRATION_PARAMETER_COMPRESS_THREADS),
             params->compress_threads);
+        assert(params->has_compress_wait_thread);
+        monitor_printf(mon, "%s: %s\n",
+            MigrationParameter_str(MIGRATION_PARAMETER_COMPRESS_WAIT_THREAD),
"on" : "off"); assert(params->has_decompress_threads); monitor_printf(mon, "%s: %u\n", MigrationParameter_str(MIGRATION_PARAMETER_DECOMPRESS_THREADS), @@ -1623,6 +1627,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict) p->has_compress_threads = true; visit_type_int(v, param, &p->compress_threads, &err); break; + case MIGRATION_PARAMETER_COMPRESS_WAIT_THREAD: + p->has_compress_wait_thread = true; + visit_type_bool(v, param, &p->compress_wait_thread, &err); + break; case MIGRATION_PARAMETER_DECOMPRESS_THREADS: p->has_decompress_threads = true; visit_type_int(v, param, &p->decompress_threads, &err); diff --git a/migration/migration.c b/migration/migration.c index b7d9854bda..2ccaadc03d 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -671,6 +671,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp) params->compress_level = s->parameters.compress_level; params->has_compress_threads = true; params->compress_threads = s->parameters.compress_threads; + params->has_compress_wait_thread = true; + params->compress_wait_thread = s->parameters.compress_wait_thread; params->has_decompress_threads = true; params->decompress_threads = s->parameters.decompress_threads; params->has_cpu_throttle_initial = true; @@ -1061,6 +1063,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params, dest->compress_threads = params->compress_threads; } + if (params->has_compress_wait_thread) { + dest->compress_wait_thread = params->compress_wait_thread; + } + if (params->has_decompress_threads) { dest->decompress_threads = params->decompress_threads; } @@ -1126,6 +1132,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp) s->parameters.compress_threads = params->compress_threads; } + if (params->has_compress_wait_thread) { + s->parameters.compress_wait_thread = params->compress_wait_thread; + } + if (params->has_decompress_threads) { s->parameters.decompress_threads = params->decompress_threads; } @@ -1871,6 +1881,15 @@ int migrate_compress_threads(void) return s->parameters.compress_threads; } +int migrate_compress_wait_thread(void) +{ + MigrationState *s; + + s = migrate_get_current(); + + return s->parameters.compress_wait_thread; +} + int migrate_decompress_threads(void) { MigrationState *s; @@ -3131,6 +3150,8 @@ static Property migration_properties[] = { DEFINE_PROP_UINT8("x-compress-threads", MigrationState, parameters.compress_threads, DEFAULT_MIGRATE_COMPRESS_THREAD_COUNT), + DEFINE_PROP_BOOL("x-compress-wait-thread", MigrationState, + parameters.compress_wait_thread, true), DEFINE_PROP_UINT8("x-decompress-threads", MigrationState, parameters.decompress_threads, DEFAULT_MIGRATE_DECOMPRESS_THREAD_COUNT), diff --git a/migration/migration.h b/migration/migration.h index 64a7b33735..a46b9e6c8d 100644 --- a/migration/migration.h +++ b/migration/migration.h @@ -271,6 +271,7 @@ bool migrate_use_return_path(void); bool migrate_use_compression(void); int migrate_compress_level(void); int migrate_compress_threads(void); +int migrate_compress_wait_thread(void); int migrate_decompress_threads(void); bool migrate_use_events(void); bool migrate_postcopy_blocktime(void); diff --git a/migration/ram.c b/migration/ram.c index 24dea2730c..ae9e83c2b6 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1889,30 +1889,34 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block, ram_addr_t offset) { int idx, thread_count, bytes_xmit = -1, pages = -1; + bool wait = migrate_compress_wait_thread(); thread_count = 
     thread_count = migrate_compress_threads();
     qemu_mutex_lock(&comp_done_lock);
-    while (true) {
-        for (idx = 0; idx < thread_count; idx++) {
-            if (comp_param[idx].done) {
-                comp_param[idx].done = false;
-                bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
-                qemu_mutex_lock(&comp_param[idx].mutex);
-                set_compress_params(&comp_param[idx], block, offset);
-                qemu_cond_signal(&comp_param[idx].cond);
-                qemu_mutex_unlock(&comp_param[idx].mutex);
-                pages = 1;
-                ram_counters.normal++;
-                ram_counters.transferred += bytes_xmit;
-                break;
-            }
-        }
-        if (pages > 0) {
+retry:
+    for (idx = 0; idx < thread_count; idx++) {
+        if (comp_param[idx].done) {
+            comp_param[idx].done = false;
+            bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+            qemu_mutex_lock(&comp_param[idx].mutex);
+            set_compress_params(&comp_param[idx], block, offset);
+            qemu_cond_signal(&comp_param[idx].cond);
+            qemu_mutex_unlock(&comp_param[idx].mutex);
+            pages = 1;
+            ram_counters.normal++;
+            ram_counters.transferred += bytes_xmit;
             break;
-        } else {
-            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
         }
     }
+
+    /*
+     * wait for the free thread if the user specifies 'compress-wait-thread',
+     * otherwise we will post the page out in the main thread as normal page.
+     */
+    if (pages < 0 && wait) {
+        qemu_cond_wait(&comp_done_cond, &comp_done_lock);
+        goto retry;
+    }
     qemu_mutex_unlock(&comp_done_lock);
 
     return pages;
@@ -2226,7 +2230,10 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
      * CPU resource.
      */
     if (block == rs->last_sent_block && save_page_use_compression(rs)) {
-        return compress_page_with_multi_thread(rs, block, offset);
+        res = compress_page_with_multi_thread(rs, block, offset);
+        if (res > 0) {
+            return res;
+        }
     } else if (migrate_use_multifd()) {
         return ram_save_multifd_page(rs, block, offset);
     }
diff --git a/qapi/migration.json b/qapi/migration.json
index 186e8a7303..e6991fcbd2 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -462,6 +462,11 @@
 # @compress-threads: Set compression thread count to be used in live migration,
 #          the compression thread count is an integer between 1 and 255.
 #
+# @compress-wait-thread: Wait if no thread is free to compress the memory page
+#          if it's enabled, otherwise, the page will be posted out immediately
+#          in the main thread without compression. It's true on default.
+#          (Since: 3.1)
+#
 # @decompress-threads: Set decompression thread count to be used in live
 #          migration, the decompression thread count is an integer between 1
 #          and 255. Usually, decompression is at least 4 times as fast as
@@ -526,11 +531,11 @@
 # Since: 2.4
 ##
 { 'enum': 'MigrationParameter',
-  'data': ['compress-level', 'compress-threads', 'decompress-threads',
-           'cpu-throttle-initial', 'cpu-throttle-increment',
-           'tls-creds', 'tls-hostname', 'max-bandwidth',
-           'downtime-limit', 'x-checkpoint-delay', 'block-incremental',
-           'x-multifd-channels', 'x-multifd-page-count',
+  'data': ['compress-level', 'compress-threads', 'compress-wait-thread',
+           'decompress-threads', 'cpu-throttle-initial',
+           'cpu-throttle-increment', 'tls-creds', 'tls-hostname',
+           'max-bandwidth', 'downtime-limit', 'x-checkpoint-delay',
+           'block-incremental', 'x-multifd-channels', 'x-multifd-page-count',
            'xbzrle-cache-size', 'max-postcopy-bandwidth' ] }
 
 ##
@@ -540,6 +545,9 @@
 #
 # @compress-threads: compression thread count
 #
+# @compress-wait-thread: Wait if no thread is free to compress the memory page
+#                        (Since: 3.1)
+#
 # @decompress-threads: decompression thread count
 #
 # @cpu-throttle-initial: Initial percentage of time guest cpus are
@@ -610,6 +618,7 @@
 { 'struct': 'MigrateSetParameters',
   'data': { '*compress-level': 'int',
             '*compress-threads': 'int',
+            '*compress-wait-thread': 'bool',
             '*decompress-threads': 'int',
             '*cpu-throttle-initial': 'int',
             '*cpu-throttle-increment': 'int',
@@ -649,6 +658,9 @@
 #
 # @compress-threads: compression thread count
 #
+# @compress-wait-thread: Wait if no thread is free to compress the memory page
+#                        (Since: 3.1)
+#
 # @decompress-threads: decompression thread count
 #
 # @cpu-throttle-initial: Initial percentage of time guest cpus are
@@ -714,6 +726,7 @@
 { 'struct': 'MigrationParameters',
   'data': { '*compress-level': 'uint8',
             '*compress-threads': 'uint8',
+            '*compress-wait-thread': 'bool',
             '*decompress-threads': 'uint8',
             '*cpu-throttle-initial': 'uint8',
             '*cpu-throttle-increment': 'uint8',
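
Not part of the patch, only a rough usage sketch for reviewers: assuming this
series is applied, the new knob can be flipped from HMP or QMP as below. The
parameter and command names come from the hmp.c and qapi/migration.json hunks
above; per the x-compress-wait-thread default of true, leaving it untouched
keeps the old waiting behavior.

    # HMP: stop waiting for a free compression thread; pages are then sent
    # uncompressed from the migration thread whenever all threads are busy.
    (qemu) migrate_set_parameter compress-wait-thread off
    (qemu) info migrate_parameters

    # QMP equivalent of the two commands above.
    { "execute": "migrate-set-parameters",
      "arguments": { "compress-wait-thread": false } }
    { "execute": "query-migrate-parameters" }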