From patchwork Tue Aug 21 08:10:26 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 10571185
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com,
    peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn,
    eblake@redhat.com, Xiao Guangrong
Subject: [PATCH v4 07/10] migration: do not flush_compressed_data at the end of each iteration
Date: Tue, 21 Aug 2018 16:10:26 +0800
Message-Id: <20180821081029.26121-8-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.4
In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com>
References: <20180821081029.26121-1-xiaoguangrong@tencent.com>

From: Xiao Guangrong

flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, every thread sits idle until the migration
code feeds new requests to it. Reducing the number of calls to it
therefore improves throughput and uses CPU resources more effectively
(see the standalone sketch of this wait-and-drain pattern after the
patch).

We do not need to flush all threads at the end of each iteration; the
compressed data can be kept locally until the memory block changes or
memory migration starts over. In that case we may meet a dirtied page
which still exists in a compression thread's ring, so the flush is moved
into migration_bitmap_sync().

Signed-off-by: Xiao Guangrong
---
 migration/ram.c | 90 +++++++++++++++++++++++++++++++--------------------------
 1 file changed, 49 insertions(+), 41 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 99ecf9b315..1d54285501 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1602,6 +1602,47 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
     }
 }
 
+static void
+update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
+{
+    if (param->zero_page) {
+        ram_counters.duplicate++;
+    }
+    ram_counters.transferred += bytes_xmit;
+}
+
+static void flush_compressed_data(RAMState *rs)
+{
+    int idx, len, thread_count;
+
+    if (!migrate_use_compression()) {
+        return;
+    }
+    thread_count = migrate_compress_threads();
+
+    qemu_mutex_lock(&comp_done_lock);
+    for (idx = 0; idx < thread_count; idx++) {
+        while (!comp_param[idx].done) {
+            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
+        }
+    }
+    qemu_mutex_unlock(&comp_done_lock);
+
+    for (idx = 0; idx < thread_count; idx++) {
+        qemu_mutex_lock(&comp_param[idx].mutex);
+        if (!comp_param[idx].quit) {
+            len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+            /*
+             * it's safe to fetch zero_page without holding comp_done_lock
+             * as there is no further request submitted to the thread,
+             * i.e, the thread should be waiting for a request at this point.
+             */
+            update_compress_thread_counts(&comp_param[idx], len);
+        }
+        qemu_mutex_unlock(&comp_param[idx].mutex);
+    }
+}
+
 static void migration_bitmap_sync(RAMState *rs)
 {
     RAMBlock *block;
@@ -1610,6 +1651,14 @@ static void migration_bitmap_sync(RAMState *rs)
 
     ram_counters.dirty_sync_count++;
 
+    /*
+     * If memory migration starts over, we will meet a dirtied page which
+     * may still exist in a compression thread's ring, so we should flush
+     * the compressed data to make sure the new page is not overwritten by
+     * the old one on the destination.
+     */
+    flush_compressed_data(rs);
+
     if (!rs->time_last_bitmap_sync) {
         rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     }
@@ -1878,47 +1927,6 @@ exit:
     return zero_page;
 }
 
-static void
-update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
-{
-    if (param->zero_page) {
-        ram_counters.duplicate++;
-    }
-    ram_counters.transferred += bytes_xmit;
-}
-
-static void flush_compressed_data(RAMState *rs)
-{
-    int idx, len, thread_count;
-
-    if (!migrate_use_compression()) {
-        return;
-    }
-    thread_count = migrate_compress_threads();
-
-    qemu_mutex_lock(&comp_done_lock);
-    for (idx = 0; idx < thread_count; idx++) {
-        while (!comp_param[idx].done) {
-            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
-        }
-    }
-    qemu_mutex_unlock(&comp_done_lock);
-
-    for (idx = 0; idx < thread_count; idx++) {
-        qemu_mutex_lock(&comp_param[idx].mutex);
-        if (!comp_param[idx].quit) {
-            len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
-            /*
-             * it's safe to fetch zero_page without holding comp_done_lock
-             * as there is no further request submitted to the thread,
-             * i.e, the thread should be waiting for a request at this point.
-             */
-            update_compress_thread_counts(&comp_param[idx], len);
-        }
-        qemu_mutex_unlock(&comp_param[idx].mutex);
-    }
-}
-
 static inline void set_compress_params(CompressParam *param, RAMBlock *block,
                                        ram_addr_t offset)
 {
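
The expensive part of flush_compressed_data() is the first loop: the
migration thread blocks on comp_done_cond until every compression thread
has marked itself done, and only then drains each thread's buffered
output. Below is a minimal, self-contained sketch of that wait-and-drain
pattern using plain pthreads. It is illustrative only, not QEMU code; all
names (worker_t, flush_workers, NR_WORKERS, buffered_bytes) are made up
for the example.

    /* cc -pthread sketch.c -o sketch */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NR_WORKERS 2

    typedef struct {
        pthread_mutex_t mutex;   /* protects buffered_bytes and quit */
        bool busy;               /* protected by done_lock, like comp_param[i].done */
        bool quit;
        int buffered_bytes;      /* compressed output not yet sent to the stream */
    } worker_t;

    static worker_t workers[NR_WORKERS];
    static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;

    static void *worker_fn(void *arg)
    {
        worker_t *w = arg;

        /* Pretend to compress one page into a local buffer. */
        pthread_mutex_lock(&w->mutex);
        w->buffered_bytes = 4096;
        pthread_mutex_unlock(&w->mutex);

        /* Report completion so a flusher waiting on done_cond can proceed. */
        pthread_mutex_lock(&done_lock);
        w->busy = false;
        pthread_cond_signal(&done_cond);
        pthread_mutex_unlock(&done_lock);
        return NULL;
    }

    /* Drain all workers: the step the patch tries to run less often. */
    static void flush_workers(void)
    {
        /* Phase 1: block until every worker is idle. */
        pthread_mutex_lock(&done_lock);
        for (int i = 0; i < NR_WORKERS; i++) {
            while (workers[i].busy) {
                pthread_cond_wait(&done_cond, &done_lock);
            }
        }
        pthread_mutex_unlock(&done_lock);

        /* Phase 2: collect each worker's buffered output under its own lock. */
        for (int i = 0; i < NR_WORKERS; i++) {
            pthread_mutex_lock(&workers[i].mutex);
            if (!workers[i].quit && workers[i].buffered_bytes) {
                printf("worker %d: sending %d buffered bytes\n",
                       i, workers[i].buffered_bytes);
                workers[i].buffered_bytes = 0;
            }
            pthread_mutex_unlock(&workers[i].mutex);
        }
    }

    int main(void)
    {
        pthread_t tids[NR_WORKERS];

        for (int i = 0; i < NR_WORKERS; i++) {
            pthread_mutex_init(&workers[i].mutex, NULL);
            workers[i].busy = true;
            pthread_create(&tids[i], NULL, worker_fn, &workers[i]);
        }

        flush_workers();          /* all workers are idle after this returns */

        for (int i = 0; i < NR_WORKERS; i++) {
            pthread_join(tids[i], NULL);
        }
        return 0;
    }

Every call to the drain step serializes the migration thread against all
workers, which is why the patch keeps the data buffered and flushes only
when a new round of dirty-bitmap syncing starts (or the memory block
changes), instead of at the end of each iteration.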