From patchwork Tue Mar 27 09:10:36 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 10312577
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, Xiao Guangrong, qemu-devel@nongnu.org,
    peterx@redhat.com, dgilbert@redhat.com, wei.w.wang@intel.com,
    jiang.biao2@zte.com.cn
Date: Tue, 27 Mar 2018 17:10:36 +0800
Message-Id: <20180327091043.30220-4-xiaoguangrong@tencent.com>
In-Reply-To: <20180327091043.30220-1-xiaoguangrong@tencent.com>
References: <20180327091043.30220-1-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.3
Subject: [Qemu-devel] [PATCH v2 03/10] migration: stop decompression to allocate and free memory frequently

From: Xiao Guangrong

The current code uses uncompress() to decompress memory. uncompress()
manages its working memory internally, so a large amount of memory is
allocated and freed on every decompression; worse, frequently returning
memory to the kernel flushes the TLBs.

So maintain the memory ourselves and reuse it for each decompression.

Reviewed-by: Jiang Biao
Signed-off-by: Xiao Guangrong
---
 migration/ram.c | 110 ++++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 80 insertions(+), 30 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index e043a192e1..6b699650ca 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -281,6 +281,7 @@ struct DecompressParam {
     void *des;
     uint8_t *compbuf;
     int len;
+    z_stream stream;
 };
 typedef struct DecompressParam DecompressParam;
 
@@ -2526,6 +2527,31 @@ void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
     }
 }
 
+/* return the size after decompression, or negative value on error */
+static int
+qemu_uncompress_data(z_stream *stream, uint8_t *dest, size_t dest_len,
+                     const uint8_t *source, size_t source_len)
+{
+    int err;
+
+    err = inflateReset(stream);
+    if (err != Z_OK) {
+        return -1;
+    }
+
+    stream->avail_in = source_len;
+    stream->next_in = (uint8_t *)source;
+    stream->avail_out = dest_len;
+    stream->next_out = dest;
+
+    err = inflate(stream, Z_NO_FLUSH);
+    if (err != Z_STREAM_END) {
+        return -1;
+    }
+
+    return stream->total_out;
+}
+
 static void *do_data_decompress(void *opaque)
 {
     DecompressParam *param = opaque;
@@ -2542,13 +2568,13 @@ static void *do_data_decompress(void *opaque)
             qemu_mutex_unlock(&param->mutex);
 
             pagesize = TARGET_PAGE_SIZE;
-            /* uncompress() will return failed in some case, especially
-             * when the page is dirted when doing the compression, it's
-             * not a problem because the dirty page will be retransferred
+            /* qemu_uncompress_data() will return failed in some case,
+             * especially when the page is dirtied when doing the compression,
+             * it's not a problem because the dirty page will be retransferred
              * and uncompress() won't break the data in other pages.
             */
-            uncompress((Bytef *)des, &pagesize,
-                       (const Bytef *)param->compbuf, len);
+            qemu_uncompress_data(&param->stream, des, pagesize, param->compbuf,
+                                 len);
 
             qemu_mutex_lock(&decomp_done_lock);
             param->done = true;
@@ -2583,30 +2609,6 @@ static void wait_for_decompress_done(void)
     qemu_mutex_unlock(&decomp_done_lock);
 }
 
-static void compress_threads_load_setup(void)
-{
-    int i, thread_count;
-
-    if (!migrate_use_compression()) {
-        return;
-    }
-    thread_count = migrate_decompress_threads();
-    decompress_threads = g_new0(QemuThread, thread_count);
-    decomp_param = g_new0(DecompressParam, thread_count);
-    qemu_mutex_init(&decomp_done_lock);
-    qemu_cond_init(&decomp_done_cond);
-    for (i = 0; i < thread_count; i++) {
-        qemu_mutex_init(&decomp_param[i].mutex);
-        qemu_cond_init(&decomp_param[i].cond);
-        decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
-        decomp_param[i].done = true;
-        decomp_param[i].quit = false;
-        qemu_thread_create(decompress_threads + i, "decompress",
-                           do_data_decompress, decomp_param + i,
-                           QEMU_THREAD_JOINABLE);
-    }
-}
-
 static void compress_threads_load_cleanup(void)
 {
     int i, thread_count;
@@ -2616,16 +2618,27 @@ static void compress_threads_load_cleanup(void)
     }
     thread_count = migrate_decompress_threads();
     for (i = 0; i < thread_count; i++) {
+        /* see the comments in compress_threads_save_cleanup() */
+        if (!decomp_param[i].stream.opaque) {
+            break;
+        }
+
         qemu_mutex_lock(&decomp_param[i].mutex);
         decomp_param[i].quit = true;
         qemu_cond_signal(&decomp_param[i].cond);
         qemu_mutex_unlock(&decomp_param[i].mutex);
     }
     for (i = 0; i < thread_count; i++) {
+        if (!decomp_param[i].stream.opaque) {
+            break;
+        }
+
         qemu_thread_join(decompress_threads + i);
         qemu_mutex_destroy(&decomp_param[i].mutex);
         qemu_cond_destroy(&decomp_param[i].cond);
         g_free(decomp_param[i].compbuf);
+        inflateEnd(&decomp_param[i].stream);
+        decomp_param[i].stream.opaque = NULL;
     }
     g_free(decompress_threads);
     g_free(decomp_param);
@@ -2633,6 +2646,40 @@ static void compress_threads_load_cleanup(void)
     decomp_param = NULL;
 }
 
+static int compress_threads_load_setup(void)
+{
+    int i, thread_count;
+
+    if (!migrate_use_compression()) {
+        return 0;
+    }
+
+    thread_count = migrate_decompress_threads();
+    decompress_threads = g_new0(QemuThread, thread_count);
+    decomp_param = g_new0(DecompressParam, thread_count);
+    qemu_mutex_init(&decomp_done_lock);
+    qemu_cond_init(&decomp_done_cond);
+    for (i = 0; i < thread_count; i++) {
+        if (inflateInit(&decomp_param[i].stream) != Z_OK) {
+            goto exit;
+        }
+        decomp_param[i].stream.opaque = &decomp_param[i];
+
+        qemu_mutex_init(&decomp_param[i].mutex);
+        qemu_cond_init(&decomp_param[i].cond);
+        decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
+        decomp_param[i].done = true;
+        decomp_param[i].quit = false;
+        qemu_thread_create(decompress_threads + i, "decompress",
+                           do_data_decompress, decomp_param + i,
+                           QEMU_THREAD_JOINABLE);
+    }
+    return 0;
+exit:
+    compress_threads_load_cleanup();
+    return -1;
+}
+
 static void decompress_data_with_multi_threads(QEMUFile *f,
                                                void *host, int len)
 {
@@ -2672,8 +2719,11 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
  */
 static int ram_load_setup(QEMUFile *f, void *opaque)
 {
+    if (compress_threads_load_setup()) {
+        return -1;
+    }
+
     xbzrle_load_setup();
-    compress_threads_load_setup();
     ramblock_recv_map_init();
     return 0;
 }
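
For readers who want to try the idea outside of the QEMU tree, below is a
minimal standalone sketch of the pattern the patch applies: keep one
long-lived z_stream per decompression thread, call inflateReset() before
each buffer, and pay for inflateInit()/inflateEnd() only once, instead of
letting uncompress() build and tear down its own inflate state on every
call. Everything here (reuse_uncompress(), main(), the buffer sizes) is
illustrative and not QEMU code; only the zlib calls mirror the patch.

/* reuse.c - standalone illustration only, not part of this patch */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Decompress one buffer with an already-initialised stream,
 * mirroring the shape of qemu_uncompress_data() above. */
static int reuse_uncompress(z_stream *stream, uint8_t *dest, size_t dest_len,
                            const uint8_t *source, size_t source_len)
{
    if (inflateReset(stream) != Z_OK) {
        return -1;
    }

    stream->avail_in = source_len;
    stream->next_in = (uint8_t *)source;
    stream->avail_out = dest_len;
    stream->next_out = dest;

    /* One-shot decompression: the whole output must fit in dest. */
    if (inflate(stream, Z_NO_FLUSH) != Z_STREAM_END) {
        return -1;
    }
    return stream->total_out;
}

int main(void)
{
    uint8_t page[4096], comp[8192], out[4096];
    uLongf comp_len = sizeof(comp);
    z_stream stream;
    int i;

    /* Make some compressible input and compress it once. */
    memset(page, 0x5a, sizeof(page));
    if (compress(comp, &comp_len, page, sizeof(page)) != Z_OK) {
        return 1;
    }

    /* The inflate state is allocated exactly once per thread... */
    memset(&stream, 0, sizeof(stream));
    if (inflateInit(&stream) != Z_OK) {
        return 1;
    }

    /* ...and reused for every "page", so no malloc/free per page. */
    for (i = 0; i < 4; i++) {
        int n = reuse_uncompress(&stream, out, sizeof(out), comp, comp_len);
        printf("round %d: decompressed %d bytes\n", i, n);
    }

    inflateEnd(&stream);
    return 0;
}

Build with something like "cc reuse.c -lz"; every iteration decompresses
into the same preallocated buffers, which is the allocation behaviour the
patch wants for the per-thread decompression loop.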