From patchwork Wed Jul 20 11:19:11 2022
X-Patchwork-Submitter: "Dr. David Alan Gilbert"
X-Patchwork-Id: 12923831
From: "Dr. David Alan Gilbert (git)"
To: qemu-devel@nongnu.org, leobras@redhat.com, quintela@redhat.com,
    berrange@redhat.com, peterx@redhat.com, iii@linux.ibm.com,
    huangy81@chinatelecom.cn
Subject: [PULL 15/30] migration: Add property x-postcopy-preempt-break-huge
Date: Wed, 20 Jul 2022 12:19:11 +0100
Message-Id: <20220720111926.107055-16-dgilbert@redhat.com>
In-Reply-To: <20220720111926.107055-1-dgilbert@redhat.com>
References: <20220720111926.107055-1-dgilbert@redhat.com>

From: Peter Xu

Add a property field that can conditionally disable the "break sending
huge page" behavior in postcopy preemption.  By default it's enabled.

It should only be used for debugging purposes, and we should never
remove the "x-" prefix.

Reviewed-by: Dr. David Alan Gilbert
Reviewed-by: Manish Mishra
Signed-off-by: Peter Xu
Message-Id: <20220707185511.27366-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert
---
 migration/migration.c | 2 ++
 migration/migration.h | 7 +++++++
 migration/ram.c       | 7 +++++++
 3 files changed, 16 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 427d4de185..864164ad96 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -4363,6 +4363,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_SIZE("announce-step", MigrationState,
                      parameters.announce_step,
                      DEFAULT_MIGRATE_ANNOUNCE_STEP),
+    DEFINE_PROP_BOOL("x-postcopy-preempt-break-huge", MigrationState,
+                     postcopy_preempt_break_huge, true),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
diff --git a/migration/migration.h b/migration/migration.h
index ae4ffd3454..cdad8aceaa 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -340,6 +340,13 @@ struct MigrationState {
     bool send_configuration;
     /* Whether we send section footer during migration */
     bool send_section_footer;
+    /*
+     * Whether we allow break sending huge pages when postcopy preempt is
+     * enabled.  When disabled, we won't interrupt precopy within sending a
+     * host huge page, which is the old behavior of vanilla postcopy.
+     * NOTE: this parameter is ignored if postcopy preempt is not enabled.
+     */
+    bool postcopy_preempt_break_huge;
 
     /* Needed by postcopy-pause state */
     QemuSemaphore postcopy_pause_sem;
diff --git a/migration/ram.c b/migration/ram.c
index 65b08c4edb..7cbe9c310d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2266,11 +2266,18 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
 
 static bool postcopy_needs_preempt(RAMState *rs, PageSearchStatus *pss)
 {
+    MigrationState *ms = migrate_get_current();
+
     /* Not enabled eager preempt?  Then never do that. */
     if (!migrate_postcopy_preempt()) {
         return false;
     }
 
+    /* If the user explicitly disabled breaking of huge page, skip */
+    if (!ms->postcopy_preempt_break_huge) {
+        return false;
+    }
+
     /* If the ramblock we're sending is a small page?  Never bother. */
     if (qemu_ram_pagesize(pss->block) == TARGET_PAGE_SIZE) {
         return false;
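
For readers tracing the decision logic above outside of the QEMU tree, here is a
minimal standalone sketch of the early-exit checks that postcopy_needs_preempt()
performs once this patch is applied.  It is not the QEMU code itself:
ModelMigrationState, ModelPageSearchStatus and block_page_size are simplified
stand-ins for MigrationState, PageSearchStatus and qemu_ram_pagesize(), and the
real function has further conditions after the ones modelled here.

/*
 * Standalone model (NOT the QEMU code) of the early-exit checks in
 * postcopy_needs_preempt() after this patch.  All types here are
 * simplified stand-ins for the real QEMU structures.
 */
#include <stdbool.h>
#include <stdio.h>

#define TARGET_PAGE_SIZE 4096   /* small (target) page size used for comparison */

typedef struct {
    bool postcopy_preempt_enabled;      /* postcopy-preempt capability on/off */
    bool postcopy_preempt_break_huge;   /* new x- property, default true */
} ModelMigrationState;

typedef struct {
    size_t block_page_size;             /* host page size of the ramblock */
} ModelPageSearchStatus;

/* Mirrors the ordering of the checks shown in the ram.c hunk above. */
static bool model_postcopy_needs_preempt(const ModelMigrationState *ms,
                                         const ModelPageSearchStatus *pss)
{
    /* Preemption not enabled at all: never break a huge page. */
    if (!ms->postcopy_preempt_enabled) {
        return false;
    }
    /* New check: the user explicitly disabled breaking of huge pages. */
    if (!ms->postcopy_preempt_break_huge) {
        return false;
    }
    /* Ramblock backed by small pages: nothing to break, never bother. */
    if (pss->block_page_size == TARGET_PAGE_SIZE) {
        return false;
    }
    /* (The real function continues with more conditions here.) */
    return true;
}

int main(void)
{
    ModelMigrationState ms = { .postcopy_preempt_enabled = true,
                               .postcopy_preempt_break_huge = true };
    ModelPageSearchStatus huge = { .block_page_size = 2 * 1024 * 1024 };

    /* Default: huge-page breaking allowed -> prints 1 */
    printf("break huge page? %d\n", model_postcopy_needs_preempt(&ms, &huge));

    /* Debugging knob flipped off -> prints 0, old vanilla-postcopy behavior */
    ms.postcopy_preempt_break_huge = false;
    printf("break huge page? %d\n", model_postcopy_needs_preempt(&ms, &huge));
    return 0;
}

Assuming the usual handling of DEFINE_PROP_BOOL properties registered on the
"migration" object, the knob can presumably be flipped for debugging with
something like "-global migration.x-postcopy-preempt-break-huge=false" on the
QEMU command line; the default keeps huge-page breaking enabled.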