From patchwork Mon Aug 26 19:53:16 2024
X-Patchwork-Submitter: Fabiano Rosas
X-Patchwork-Id: 13778365
From: Fabiano Rosas <farosas@suse.de>
To: qemu-devel@nongnu.org
Cc: Peter Xu, "Maciej S. Szmigiero", Philippe Mathieu-Daudé
Subject: [PATCH v5 12/18] migration/multifd: Replace multifd_send_state->pages with client data
Date: Mon, 26 Aug 2024 16:53:16 -0300
Message-Id: <20240826195322.16532-13-farosas@suse.de>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20240826195322.16532-1-farosas@suse.de>
References: <20240826195322.16532-1-farosas@suse.de>

Multifd currently has a simple scheduling mechanism that distributes
work to the various channels by keeping storage space within each
channel and one extra space that is given to the client. Each time
the client fills the space with data and calls into multifd, that
space is handed to the next idle channel and a free storage space is
taken from that channel and given back to the client for the next
iteration.

This means we always need (#multifd_channels + 1) memory slots to
operate multifd. That is fine, except that the presence of this
single extra memory slot prevents different types of payloads from
being processed at the same time in different channels: the data type
of multifd_send_state->pages needs to be the same as p->pages. To
support each new data type besides MultiFDPage_t, this logic would
have to be duplicated by adding new fields to multifd_send_state, to
the channels and to multifd_send_pages().

Fix this by moving the extra slot into the client and using only the
generic type MultiFDSendData in the multifd core.

Reviewed-by: Peter Xu
Signed-off-by: Fabiano Rosas
---
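As an aside for readers unfamiliar with this scheme, here is a
minimal single-threaded C sketch of the pointer swap that
multifd_send() performs. SlotData, Channel and hand_off are
hypothetical simplifications of MultiFDSendData and
MultiFDSendParams; the real code additionally uses atomics and memory
barriers to make the hand-off safe across threads.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool empty;     /* stands in for multifd_payload_empty() */
    int payload;    /* stands in for the payload union */
} SlotData;

typedef struct {
    SlotData *data; /* slot currently owned by this channel */
    bool pending_job;
} Channel;

/* Swap the caller's full slot with the idle channel's empty one. */
static void hand_off(SlotData **caller_slot, Channel *ch)
{
    SlotData *tmp;

    /* the channel must have drained its slot before we reuse it */
    assert(ch->data->empty);

    tmp = *caller_slot;       /* full slot goes to the channel... */
    *caller_slot = ch->data;  /* ...empty slot comes back to the caller */
    ch->data = tmp;
    ch->pending_job = true;   /* the channel now owns the payload */
}

int main(void)
{
    SlotData full = { .empty = false, .payload = 42 };
    SlotData spare = { .empty = true };
    Channel ch = { .data = &spare };
    SlotData *caller_slot = &full;

    hand_off(&caller_slot, &ch);

    /* the caller got back an empty slot it can refill immediately */
    printf("channel payload=%d, caller slot empty=%d\n",
           ch.data->payload, caller_slot->empty);
    return 0;
}

The point of the swap is that exactly one owner holds each slot at
any time, so no reference counting or locking is needed; the barriers
in the real code only order the pending_job flag against the pointer
update.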
 migration/multifd.c | 79 ++++++++++++++++++++++++++-------------------
 migration/multifd.h |  3 ++
 migration/ram.c     |  2 ++
 3 files changed, 50 insertions(+), 34 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 3a164c124d..cb7a121eb0 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -49,7 +49,6 @@ typedef struct {
 
 struct {
     MultiFDSendParams *params;
-    MultiFDSendData *data;
     /*
      * Global number of generated multifd packets.
      *
@@ -97,6 +96,8 @@ struct {
     MultiFDMethods *ops;
 } *multifd_recv_state;
 
+static MultiFDSendData *multifd_ram_send;
+
 static size_t multifd_ram_payload_size(void)
 {
     uint32_t n = multifd_ram_page_count();
@@ -130,6 +131,17 @@ static MultiFDSendData *multifd_send_data_alloc(void)
     return g_malloc0(size_minus_payload + max_payload_size);
 }
 
+void multifd_ram_save_setup(void)
+{
+    multifd_ram_send = multifd_send_data_alloc();
+}
+
+void multifd_ram_save_cleanup(void)
+{
+    g_free(multifd_ram_send);
+    multifd_ram_send = NULL;
+}
+
 static bool multifd_use_packets(void)
 {
     return !migrate_mapped_ram();
@@ -610,25 +622,20 @@ static void multifd_send_kick_main(MultiFDSendParams *p)
 }
 
 /*
- * How we use multifd_send_state->pages and channel->pages?
+ * multifd_send() works by exchanging the MultiFDSendData object
+ * provided by the caller with an unused MultiFDSendData object from
+ * the next channel that is found to be idle.
  *
- * We create a pages for each channel, and a main one. Each time that
- * we need to send a batch of pages we interchange the ones between
- * multifd_send_state and the channel that is sending it. There are
- * two reasons for that:
- * - to not have to do so many mallocs during migration
- * - to make easier to know what to free at the end of migration
+ * The channel owns the data until it finishes transmitting and the
+ * caller owns the empty object until it fills it with data and calls
+ * this function again. No locking necessary.
  *
- * This way we always know who is the owner of each "pages" struct,
- * and we don't need any locking. It belongs to the migration thread
- * or to the channel thread. Switching is safe because the migration
- * thread is using the channel mutex when changing it, and the channel
- * have to had finish with its own, otherwise pending_job can't be
- * false.
+ * Switching is safe because both the migration thread and the channel
+ * thread have barriers in place to serialize access.
  *
  * Returns true if succeed, false otherwise.
  */
-static bool multifd_send_pages(void)
+static bool multifd_send(MultiFDSendData **send_data)
 {
     int i;
     static int next_channel;
@@ -669,11 +676,16 @@
      */
     smp_mb_acquire();
 
-    assert(!p->data->u.ram.num);
+    assert(multifd_payload_empty(p->data));
 
-    tmp = multifd_send_state->data;
-    multifd_send_state->data = p->data;
+    /*
+     * Swap the pointers. The channel gets the client data for
+     * transferring and the client gets back an unused data slot.
+     */
+    tmp = *send_data;
+    *send_data = p->data;
     p->data = tmp;
+
     /*
      * Making sure p->data is setup before marking pending_job=true. Pairs
      * with the qatomic_load_acquire() in multifd_send_thread().
@@ -705,7 +717,12 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
     MultiFDPages_t *pages;
 
 retry:
-    pages = &multifd_send_state->data->u.ram;
+    pages = &multifd_ram_send->u.ram;
+
+    if (multifd_payload_empty(multifd_ram_send)) {
+        multifd_pages_reset(pages);
+        multifd_set_payload_type(multifd_ram_send, MULTIFD_PAYLOAD_RAM);
+    }
 
     /* If the queue is empty, we can already enqueue now */
     if (multifd_queue_empty(pages)) {
@@ -723,7 +740,7 @@ retry:
      * After flush, always retry.
      */
     if (pages->block != block || multifd_queue_full(pages)) {
-        if (!multifd_send_pages()) {
+        if (!multifd_send(&multifd_ram_send)) {
             return false;
         }
         goto retry;
@@ -853,8 +870,6 @@ static void multifd_send_cleanup_state(void)
     qemu_sem_destroy(&multifd_send_state->channels_ready);
     g_free(multifd_send_state->params);
     multifd_send_state->params = NULL;
-    g_free(multifd_send_state->data);
-    multifd_send_state->data = NULL;
     g_free(multifd_send_state);
     multifd_send_state = NULL;
 }
@@ -903,15 +918,14 @@ int multifd_send_sync_main(void)
 {
     int i;
     bool flush_zero_copy;
-    MultiFDPages_t *pages;
 
     if (!migrate_multifd()) {
         return 0;
     }
-    pages = &multifd_send_state->data->u.ram;
-    if (pages->num) {
-        if (!multifd_send_pages()) {
-            error_report("%s: multifd_send_pages fail", __func__);
+
+    if (!multifd_payload_empty(multifd_ram_send)) {
+        if (!multifd_send(&multifd_ram_send)) {
+            error_report("%s: multifd_send fail", __func__);
             return -1;
         }
     }
@@ -985,13 +999,11 @@ static void *multifd_send_thread(void *opaque)
 
         /*
          * Read pending_job flag before p->data. Pairs with the
-         * qatomic_store_release() in multifd_send_pages().
+         * qatomic_store_release() in multifd_send().
          */
         if (qatomic_load_acquire(&p->pending_job)) {
-            MultiFDPages_t *pages = &p->data->u.ram;
-
             p->iovs_num = 0;
-            assert(pages->num);
+            assert(!multifd_payload_empty(p->data));
 
             ret = multifd_send_state->ops->send_prepare(p, &local_err);
             if (ret != 0) {
@@ -1014,13 +1026,13 @@
             stat64_add(&mig_stats.multifd_bytes,
                        p->next_packet_size + p->packet_len);
 
-            multifd_pages_reset(pages);
             p->next_packet_size = 0;
+            multifd_set_payload_type(p->data, MULTIFD_PAYLOAD_NONE);
 
             /*
              * Making sure p->data is published before saying "we're
              * free". Pairs with the smp_mb_acquire() in
-             * multifd_send_pages().
+             * multifd_send().
              */
             qatomic_store_release(&p->pending_job, false);
         } else {
@@ -1212,7 +1224,6 @@ bool multifd_send_setup(void)
     thread_count = migrate_multifd_channels();
     multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
     multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
-    multifd_send_state->data = multifd_send_data_alloc();
     qemu_sem_init(&multifd_send_state->channels_created, 0);
     qemu_sem_init(&multifd_send_state->channels_ready, 0);
     qatomic_set(&multifd_send_state->exiting, 0);
diff --git a/migration/multifd.h b/migration/multifd.h
index 9175104aea..5fa384d9af 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -267,4 +267,7 @@ static inline uint32_t multifd_ram_page_count(void)
 {
     return MULTIFD_PACKET_SIZE / qemu_target_page_size();
 }
+
+void multifd_ram_save_setup(void);
+void multifd_ram_save_cleanup(void);
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index edec1a2d07..1815b2557b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2387,6 +2387,7 @@ static void ram_save_cleanup(void *opaque)
     ram_bitmaps_destroy();
 
     xbzrle_cleanup();
+    multifd_ram_save_cleanup();
     ram_state_cleanup(rsp);
     g_free(migration_ops);
     migration_ops = NULL;
@@ -3058,6 +3059,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
     migration_ops = g_malloc0(sizeof(MigrationOps));
 
     if (migrate_multifd()) {
+        multifd_ram_save_setup();
        migration_ops->ram_save_target_page = ram_save_target_page_multifd;
     } else {
        migration_ops->ram_save_target_page = ram_save_target_page_legacy;
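As a closing aside, the client-side pattern that multifd_queue_page()
follows after this change (check for an empty slot, stamp the payload
type, fill, hand off when full, retry) can be sketched as below.
Slot, PAYLOAD_RAM, try_send and queue_item are hypothetical stand-ins
for MultiFDSendData, MULTIFD_PAYLOAD_RAM, multifd_send() and
multifd_queue_page(); the fake try_send just drains the slot in place
instead of swapping it with a channel's spare.

#include <stdbool.h>
#include <stdio.h>

enum payload_type { PAYLOAD_NONE, PAYLOAD_RAM };

typedef struct {
    enum payload_type type;
    int num;                /* queued items, stands in for u.ram.num */
} Slot;

static Slot spare;          /* stands in for multifd_ram_send */
static Slot *client = &spare;

/* Pretend hand-off: drain the slot as a real channel thread would. */
static bool try_send(Slot **slot)
{
    printf("sending %d items\n", (*slot)->num);
    (*slot)->type = PAYLOAD_NONE;
    (*slot)->num = 0;
    return true;
}

static bool queue_item(void)
{
retry:
    /* (re)initialize the payload if the slot came back empty */
    if (client->type == PAYLOAD_NONE) {
        client->type = PAYLOAD_RAM;
        client->num = 0;
    }
    if (client->num < 4) {      /* queue not full: enqueue and return */
        client->num++;
        return true;
    }
    if (!try_send(&client)) {   /* full: hand off, then retry */
        return false;
    }
    goto retry;
}

int main(void)
{
    for (int i = 0; i < 10; i++) {
        queue_item();
    }
    try_send(&client);          /* final flush, as the sync path does */
    return 0;
}

Because the payload type now lives in the slot itself, a future
client with a different payload would keep its own slot and call the
same send path, which is the generality the commit message is after.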