From patchwork Thu Jul 19 12:15:13 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 10534241
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com, peterx@redhat.com,
    wei.w.wang@intel.com, jiang.biao2@zte.com.cn, eblake@redhat.com,
    Xiao Guangrong
Subject: [PATCH v2 1/8] migration: do not wait for free thread
Date: Thu, 19 Jul 2018 20:15:13 +0800
Message-Id: <20180719121520.30026-2-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.4
In-Reply-To: <20180719121520.30026-1-xiaoguangrong@tencent.com>
References: <20180719121520.30026-1-xiaoguangrong@tencent.com>

From: Xiao Guangrong

Instead of putting the main thread to sleep to wait for a free
compression thread, we can directly post the page out as a normal
page; this reduces latency and uses CPUs more efficiently.

A new parameter, compress-wait-thread, is introduced; it can be
enabled if the user really wants the old behavior.

Signed-off-by: Xiao Guangrong
---
 hmp.c                 |  8 ++++++++
 migration/migration.c | 21 +++++++++++++++++++++
 migration/migration.h |  1 +
 migration/ram.c       | 45 ++++++++++++++++++++++++++-------------------
 qapi/migration.json   | 23 ++++++++++++++++++-----
 5 files changed, 74 insertions(+), 24 deletions(-)

diff --git a/hmp.c b/hmp.c
index 2aafb50e8e..47d36e3ccf 100644
--- a/hmp.c
+++ b/hmp.c
@@ -327,6 +327,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "%s: %u\n",
             MigrationParameter_str(MIGRATION_PARAMETER_COMPRESS_THREADS),
             params->compress_threads);
+        assert(params->has_compress_wait_thread);
+        monitor_printf(mon, "%s: %s\n",
+            MigrationParameter_str(MIGRATION_PARAMETER_COMPRESS_WAIT_THREAD),
+            params->compress_wait_thread ? "on" : "off");
         assert(params->has_decompress_threads);
         monitor_printf(mon, "%s: %u\n",
             MigrationParameter_str(MIGRATION_PARAMETER_DECOMPRESS_THREADS),
@@ -1623,6 +1627,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         p->has_compress_threads = true;
         visit_type_int(v, param, &p->compress_threads, &err);
         break;
+    case MIGRATION_PARAMETER_COMPRESS_WAIT_THREAD:
+        p->has_compress_wait_thread = true;
+        visit_type_bool(v, param, &p->compress_wait_thread, &err);
+        break;
     case MIGRATION_PARAMETER_DECOMPRESS_THREADS:
         p->has_decompress_threads = true;
         visit_type_int(v, param, &p->decompress_threads, &err);
diff --git a/migration/migration.c b/migration/migration.c
index 8d56d56930..0af75465b3 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -671,6 +671,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->compress_level = s->parameters.compress_level;
     params->has_compress_threads = true;
     params->compress_threads = s->parameters.compress_threads;
+    params->has_compress_wait_thread = true;
+    params->compress_wait_thread = s->parameters.compress_wait_thread;
     params->has_decompress_threads = true;
     params->decompress_threads = s->parameters.decompress_threads;
     params->has_cpu_throttle_initial = true;
@@ -1061,6 +1063,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
         dest->compress_threads = params->compress_threads;
     }
 
+    if (params->has_compress_wait_thread) {
+        dest->compress_wait_thread = params->compress_wait_thread;
+    }
+
     if (params->has_decompress_threads) {
         dest->decompress_threads = params->decompress_threads;
     }
@@ -1126,6 +1132,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
         s->parameters.compress_threads = params->compress_threads;
     }
 
+    if (params->has_compress_wait_thread) {
+        s->parameters.compress_wait_thread = params->compress_wait_thread;
+    }
+
     if (params->has_decompress_threads) {
         s->parameters.decompress_threads = params->decompress_threads;
     }
@@ -1852,6 +1862,15 @@ int migrate_compress_threads(void)
     return s->parameters.compress_threads;
 }
 
+int migrate_compress_wait_thread(void)
+{
+    MigrationState *s;
+
+    s = migrate_get_current();
+
+    return s->parameters.compress_wait_thread;
+}
+
 int migrate_decompress_threads(void)
 {
     MigrationState *s;
@@ -3113,6 +3132,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_UINT8("x-compress-threads", MigrationState,
                       parameters.compress_threads,
                       DEFAULT_MIGRATE_COMPRESS_THREAD_COUNT),
+    DEFINE_PROP_BOOL("x-compress-wait-thread", MigrationState,
+                     parameters.compress_wait_thread, false),
     DEFINE_PROP_UINT8("x-decompress-threads", MigrationState,
                       parameters.decompress_threads,
                       DEFAULT_MIGRATE_DECOMPRESS_THREAD_COUNT),
diff --git a/migration/migration.h b/migration/migration.h
index 64a7b33735..a46b9e6c8d 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -271,6 +271,7 @@ bool migrate_use_return_path(void);
 bool migrate_use_compression(void);
 int migrate_compress_level(void);
 int migrate_compress_threads(void);
+int migrate_compress_wait_thread(void);
 int migrate_decompress_threads(void);
 bool migrate_use_events(void);
 bool migrate_postcopy_blocktime(void);
diff --git a/migration/ram.c b/migration/ram.c
index 52dd678092..0ad234c692 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1889,30 +1889,34 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
                                            ram_addr_t offset)
 {
     int idx, thread_count, bytes_xmit = -1, pages = -1;
+    bool wait = migrate_compress_wait_thread();
 
     thread_count = migrate_compress_threads();
     qemu_mutex_lock(&comp_done_lock);
-    while (true) {
-        for (idx = 0; idx < thread_count; idx++) {
-            if (comp_param[idx].done) {
-                comp_param[idx].done = false;
-                bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
-                qemu_mutex_lock(&comp_param[idx].mutex);
-                set_compress_params(&comp_param[idx], block, offset);
-                qemu_cond_signal(&comp_param[idx].cond);
-                qemu_mutex_unlock(&comp_param[idx].mutex);
-                pages = 1;
-                ram_counters.normal++;
-                ram_counters.transferred += bytes_xmit;
-                break;
-            }
-        }
-        if (pages > 0) {
+retry:
+    for (idx = 0; idx < thread_count; idx++) {
+        if (comp_param[idx].done) {
+            comp_param[idx].done = false;
+            bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+            qemu_mutex_lock(&comp_param[idx].mutex);
+            set_compress_params(&comp_param[idx], block, offset);
+            qemu_cond_signal(&comp_param[idx].cond);
+            qemu_mutex_unlock(&comp_param[idx].mutex);
+            pages = 1;
+            ram_counters.normal++;
+            ram_counters.transferred += bytes_xmit;
             break;
-        } else {
-            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
         }
     }
+
+    /*
+     * If no thread is free to compress the data and the user really
+     * expects the slowdown, wait for it.
+     */
+    if (pages < 0 && wait) {
+        qemu_cond_wait(&comp_done_cond, &comp_done_lock);
+        goto retry;
+    }
     qemu_mutex_unlock(&comp_done_lock);
 
     return pages;
@@ -2226,7 +2230,10 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
      * CPU resource.
      */
     if (block == rs->last_sent_block && save_page_use_compression(rs)) {
-        return compress_page_with_multi_thread(rs, block, offset);
+        res = compress_page_with_multi_thread(rs, block, offset);
+        if (res > 0) {
+            return res;
+        }
     } else if (migrate_use_multifd()) {
         return ram_save_multifd_page(rs, block, offset);
     }
diff --git a/qapi/migration.json b/qapi/migration.json
index 186e8a7303..b4f394844b 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -462,6 +462,11 @@
 # @compress-threads: Set compression thread count to be used in live migration,
 #          the compression thread count is an integer between 1 and 255.
 #
+# @compress-wait-thread: If enabled, wait if no thread is free to compress the
+#          memory page; otherwise, the page will be posted out immediately
+#          by the main thread without compression. It's off by default.
+#          (Since: 3.0)
+#
 # @decompress-threads: Set decompression thread count to be used in live
 #          migration, the decompression thread count is an integer between 1
 #          and 255. Usually, decompression is at least 4 times as fast as
@@ -526,11 +531,11 @@
 #
 # Since: 2.4
 ##
 { 'enum': 'MigrationParameter',
-  'data': ['compress-level', 'compress-threads', 'decompress-threads',
-           'cpu-throttle-initial', 'cpu-throttle-increment',
-           'tls-creds', 'tls-hostname', 'max-bandwidth',
-           'downtime-limit', 'x-checkpoint-delay', 'block-incremental',
-           'x-multifd-channels', 'x-multifd-page-count',
+  'data': ['compress-level', 'compress-threads', 'compress-wait-thread',
+           'decompress-threads', 'cpu-throttle-initial',
+           'cpu-throttle-increment', 'tls-creds', 'tls-hostname',
+           'max-bandwidth', 'downtime-limit', 'x-checkpoint-delay',
+           'block-incremental', 'x-multifd-channels', 'x-multifd-page-count',
            'xbzrle-cache-size', 'max-postcopy-bandwidth' ] }
 
 ##
@@ -540,6 +545,9 @@
 #
 # @compress-threads: compression thread count
 #
+# @compress-wait-thread: Wait if no thread is free to compress the memory page
+#          (Since: 3.0)
+#
 # @decompress-threads: decompression thread count
 #
 # @cpu-throttle-initial: Initial percentage of time guest cpus are
@@ -610,6 +618,7 @@
 { 'struct': 'MigrateSetParameters',
   'data': { '*compress-level': 'int',
             '*compress-threads': 'int',
+            '*compress-wait-thread': 'bool',
             '*decompress-threads': 'int',
             '*cpu-throttle-initial': 'int',
             '*cpu-throttle-increment': 'int',
@@ -649,6 +658,9 @@
 #
 # @compress-threads: compression thread count
 #
+# @compress-wait-thread: Wait if no thread is free to compress the memory page
+#          (Since: 3.0)
+#
 # @decompress-threads: decompression thread count
 #
 # @cpu-throttle-initial: Initial percentage of time guest cpus are
@@ -714,6 +726,7 @@
 { 'struct': 'MigrationParameters',
   'data': { '*compress-level': 'uint8',
             '*compress-threads': 'uint8',
+            '*compress-wait-thread': 'bool',
             '*decompress-threads': 'uint8',
             '*cpu-throttle-initial': 'uint8',
             '*cpu-throttle-increment': 'uint8',
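
For reviewers who want to try the new knob, a minimal usage sketch (assuming this
series is applied): the parameter can be toggled at runtime before the migration
is started, either from the HMP monitor or through the existing QMP command
migrate-set-parameters; the parameter name comes straight from the hunks above.

    (qemu) migrate_set_parameter compress-wait-thread on
    (qemu) info migrate_parameters

    -> { "execute": "migrate-set-parameters",
         "arguments": { "compress-wait-thread": true } }
    <- { "return": {} }

With the parameter left at its default (off), a page that finds no free
compression thread is posted out uncompressed by the main thread instead of
blocking on comp_done_cond.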