From patchwork Tue Aug 21 08:10:26 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 10571203
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, Xiao Guangrong <xiaoguangrong@tencent.com>,
    qemu-devel@nongnu.org, peterx@redhat.com, dgilbert@redhat.com,
    wei.w.wang@intel.com, jiang.biao2@zte.com.cn
Date: Tue, 21 Aug 2018 16:10:26 +0800
Message-Id: <20180821081029.26121-8-xiaoguangrong@tencent.com>
In-Reply-To: <20180821081029.26121-1-xiaoguangrong@tencent.com>
References: <20180821081029.26121-1-xiaoguangrong@tencent.com>
Subject: [Qemu-devel] [PATCH v4 07/10] migration: do not flush_compressed_data at the end of each iteration

From: Xiao Guangrong <xiaoguangrong@tencent.com>

flush_compressed_data() has to wait for all compression threads to finish
their work; after that, the threads sit idle until the migration feeds new
requests to them.  Reducing the number of calls therefore improves
throughput and uses CPU resources more effectively.

We do not need to flush all threads at the end of each iteration: the data
can be kept locally until the memory block changes or memory migration
starts over.  In the latter case we will meet a dirtied page whose old copy
may still sit in a compression thread's ring, so the data must be flushed
before the new round begins.

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
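The flushing policy is easiest to see from the caller's side: keep compressed
pages buffered in the worker rings and flush only when the RAM block changes
or a new dirty-bitmap sync round begins, instead of after every iteration.
Below is a minimal, self-contained C sketch of that policy only; every name
in it (send_page, flush_workers, the page table) is a hypothetical stand-in,
not a QEMU API.

/*
 * Illustrative sketch only -- not QEMU code.  send_page(), flush_workers()
 * and the page list are hypothetical stand-ins for the real compression
 * workers and RAM blocks.
 */
#include <stdio.h>

static int flush_count;

/* Stand-in for flush_compressed_data(): wait for every worker to go idle
 * and emit whatever output it has buffered. */
static void flush_workers(void)
{
    printf("flush #%d: drain all worker rings, workers now idle\n",
           ++flush_count);
}

/* Stand-in for handing one dirty page to a free compression worker. */
static void send_page(int block_id, long page)
{
    printf("queue page %ld of block %d to a worker\n", page, block_id);
}

int main(void)
{
    /* One pass of the send loop; pages may span several RAM blocks. */
    struct { int block; long page; } pages[] = {
        { 0, 1 }, { 0, 2 }, { 1, 7 }, { 1, 9 },
    };
    int n = sizeof(pages) / sizeof(pages[0]);
    int last_block = -1;

    for (int i = 0; i < n; i++) {
        /*
         * Keep data buffered in the worker rings; flush only when the RAM
         * block changes.  The patch drops the extra flush that used to run
         * at the end of every iteration.
         */
        if (last_block != -1 && pages[i].block != last_block) {
            flush_workers();
        }
        send_page(pages[i].block, pages[i].page);
        last_block = pages[i].block;
    }

    /*
     * ... and flush again when a new dirty-bitmap sync round starts, so a
     * stale compressed copy of a page cannot overwrite the fresh copy on
     * the destination (see the migration_bitmap_sync() hunk below).
     */
    flush_workers();
    return 0;
}

With the old behaviour flush_workers() would also run once per loop pass even
when consecutive pages belong to the same block; with the policy above it
runs only at the two points the commit message names.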
 migration/ram.c | 90 +++++++++++++++++++++++++++++++--------------------------
 1 file changed, 49 insertions(+), 41 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 99ecf9b315..1d54285501 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1602,6 +1602,47 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
     }
 }
 
+static void
+update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
+{
+    if (param->zero_page) {
+        ram_counters.duplicate++;
+    }
+    ram_counters.transferred += bytes_xmit;
+}
+
+static void flush_compressed_data(RAMState *rs)
+{
+    int idx, len, thread_count;
+
+    if (!migrate_use_compression()) {
+        return;
+    }
+    thread_count = migrate_compress_threads();
+
+    qemu_mutex_lock(&comp_done_lock);
+    for (idx = 0; idx < thread_count; idx++) {
+        while (!comp_param[idx].done) {
+            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
+        }
+    }
+    qemu_mutex_unlock(&comp_done_lock);
+
+    for (idx = 0; idx < thread_count; idx++) {
+        qemu_mutex_lock(&comp_param[idx].mutex);
+        if (!comp_param[idx].quit) {
+            len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+            /*
+             * it's safe to fetch zero_page without holding comp_done_lock
+             * as there is no further request submitted to the thread,
+             * i.e, the thread should be waiting for a request at this point.
+             */
+            update_compress_thread_counts(&comp_param[idx], len);
+        }
+        qemu_mutex_unlock(&comp_param[idx].mutex);
+    }
+}
+
 static void migration_bitmap_sync(RAMState *rs)
 {
     RAMBlock *block;
@@ -1610,6 +1651,14 @@ static void migration_bitmap_sync(RAMState *rs)
 
     ram_counters.dirty_sync_count++;
 
+    /*
+     * If memory migration starts over, we will meet a dirtied page whose
+     * old copy may still exist in a compression thread's ring, so we must
+     * flush the compressed data to make sure the new page is not
+     * overwritten by the old one on the destination.
+     */
+    flush_compressed_data(rs);
+
     if (!rs->time_last_bitmap_sync) {
         rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     }
@@ -1878,47 +1927,6 @@ exit:
     return zero_page;
 }
 
-static void
-update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
-{
-    if (param->zero_page) {
-        ram_counters.duplicate++;
-    }
-    ram_counters.transferred += bytes_xmit;
-}
-
-static void flush_compressed_data(RAMState *rs)
-{
-    int idx, len, thread_count;
-
-    if (!migrate_use_compression()) {
-        return;
-    }
-    thread_count = migrate_compress_threads();
-
-    qemu_mutex_lock(&comp_done_lock);
-    for (idx = 0; idx < thread_count; idx++) {
-        while (!comp_param[idx].done) {
-            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
-        }
-    }
-    qemu_mutex_unlock(&comp_done_lock);
-
-    for (idx = 0; idx < thread_count; idx++) {
-        qemu_mutex_lock(&comp_param[idx].mutex);
-        if (!comp_param[idx].quit) {
-            len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
-            /*
-             * it's safe to fetch zero_page without holding comp_done_lock
-             * as there is no further request submitted to the thread,
-             * i.e, the thread should be waiting for a request at this point.
-             */
-            update_compress_thread_counts(&comp_param[idx], len);
-        }
-        qemu_mutex_unlock(&comp_param[idx].mutex);
-    }
-}
-
 static inline void set_compress_params(CompressParam *param, RAMBlock *block,
                                        ram_addr_t offset)
 {
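For reference, the synchronisation that flush_compressed_data() relies on is
the classic done-flag-plus-condvar pattern: first wait, under comp_done_lock,
until every worker has raised its done flag, then drain each worker's output
under that worker's own mutex.  The fragment below is a minimal pthread
sketch of the waiting half of that pattern only; names such as worker_t and
wait_all_idle() are made up here and it is not QEMU's actual code.

/*
 * Hedged sketch (not QEMU code) of the "wait for all workers" step of
 * flush_compressed_data(): one lock/condvar pair guards a per-worker done
 * flag, the caller blocks until every flag is set.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define NR_WORKERS 2

typedef struct {
    pthread_t thread;
    bool done;              /* worker idle, buffered output ready to drain */
} worker_t;

static worker_t workers[NR_WORKERS];
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;

static void *worker_fn(void *opaque)
{
    worker_t *w = opaque;

    usleep(10000);          /* pretend to compress one request */

    pthread_mutex_lock(&done_lock);
    w->done = true;         /* analogous to setting comp_param[idx].done */
    pthread_cond_signal(&done_cond);
    pthread_mutex_unlock(&done_lock);
    return NULL;
}

/* Block until every worker has signalled completion; after this no request
 * is in flight, so the buffered output can be drained safely. */
static void wait_all_idle(void)
{
    pthread_mutex_lock(&done_lock);
    for (int i = 0; i < NR_WORKERS; i++) {
        while (!workers[i].done) {
            pthread_cond_wait(&done_cond, &done_lock);
        }
    }
    pthread_mutex_unlock(&done_lock);
}

int main(void)
{
    for (int i = 0; i < NR_WORKERS; i++) {
        pthread_create(&workers[i].thread, NULL, worker_fn, &workers[i]);
    }
    wait_all_idle();
    printf("all workers idle: buffered compressed data can be drained\n");
    for (int i = 0; i < NR_WORKERS; i++) {
        pthread_join(workers[i].thread, NULL);
    }
    return 0;
}

This also shows why the second loop in the patch only needs each worker's own
mutex: once every done flag is set there is no further request in flight, so,
as the in-code comment says, zero_page and the buffered output cannot change
under the caller.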