From patchwork Fri Feb 10 22:27:22 2023
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 13136548
From: Stefano Stabellini
To: qemu-devel@nongnu.org
Cc: sstabellini@kernel.org, peter.maydell@linaro.org,
    Stefano Stabellini, Vikram Garhwal, Paul Durrant, Alex Bennée
Subject: [PULL 03/10] hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
Date: Fri, 10 Feb 2023 14:27:22 -0800
Message-Id: <20230210222729.957168-3-sstabellini@kernel.org>

From: Stefano Stabellini

In preparation for moving most of the xen-hvm code to an arch-neutral
location, move:

- shared_vmport_page
- log_for_dirtybit
- dirty_bitmap
- suspend
- wakeup

out of the XenIOState struct, as these are only used
on x86, especially the ones related to dirty logging. The updated
XenIOState can be used for both aarch64 and x86.

Also, remove free_phys_offset as it was unused.

Signed-off-by: Stefano Stabellini
Signed-off-by: Vikram Garhwal
Reviewed-by: Paul Durrant
Reviewed-by: Alex Bennée
---
 hw/i386/xen/xen-hvm.c | 58 ++++++++++++++++++++-----------------
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 1fba0e0ae1..06c446e7be 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -73,6 +73,7 @@ struct shared_vmport_iopage {
 };
 typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
+static shared_vmport_iopage_t *shared_vmport_page;
 
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
 {
@@ -95,6 +96,11 @@ typedef struct XenPhysmap {
 } XenPhysmap;
 
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
+static const XenPhysmap *log_for_dirtybit;
+/* Buffer used by xen_sync_dirty_bitmap */
+static unsigned long *dirty_bitmap;
+static Notifier suspend;
+static Notifier wakeup;
 
 typedef struct XenPciDevice {
     PCIDevice *pci_dev;
@@ -105,7 +111,6 @@ typedef struct XenPciDevice {
 typedef struct XenIOState {
     ioservid_t ioservid;
     shared_iopage_t *shared_page;
-    shared_vmport_iopage_t *shared_vmport_page;
     buffered_iopage_t *buffered_io_page;
     xenforeignmemory_resource_handle *fres;
     QEMUTimer *buffered_io_timer;
@@ -125,14 +130,8 @@ typedef struct XenIOState {
     MemoryListener io_listener;
    QLIST_HEAD(, XenPciDevice) dev_list;
     DeviceListener device_listener;
-    hwaddr free_phys_offset;
-    const XenPhysmap *log_for_dirtybit;
-    /* Buffer used by xen_sync_dirty_bitmap */
-    unsigned long *dirty_bitmap;
 
     Notifier exit;
-    Notifier suspend;
-    Notifier wakeup;
 } XenIOState;
 
 /* Xen specific function for piix pci */
@@ -462,10 +461,10 @@ static int xen_remove_from_physmap(XenIOState *state,
     }
 
     QLIST_REMOVE(physmap, list);
-    if (state->log_for_dirtybit == physmap) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+    if (log_for_dirtybit == physmap) {
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
     }
     g_free(physmap);
 
@@ -626,16 +625,16 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
         return;
     }
 
-    if (state->log_for_dirtybit == NULL) {
-        state->log_for_dirtybit = physmap;
-        state->dirty_bitmap = g_new(unsigned long, bitmap_size);
-    } else if (state->log_for_dirtybit != physmap) {
+    if (log_for_dirtybit == NULL) {
+        log_for_dirtybit = physmap;
+        dirty_bitmap = g_new(unsigned long, bitmap_size);
+    } else if (log_for_dirtybit != physmap) {
         /* Only one range for dirty bitmap can be tracked. */
         return;
     }
 
     rc = xen_track_dirty_vram(xen_domid, start_addr >> TARGET_PAGE_BITS,
-                              npages, state->dirty_bitmap);
+                              npages, dirty_bitmap);
     if (rc < 0) {
 #ifndef ENODATA
 #define ENODATA ENOENT
@@ -650,7 +649,7 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
     }
 
     for (i = 0; i < bitmap_size; i++) {
-        unsigned long map = state->dirty_bitmap[i];
+        unsigned long map = dirty_bitmap[i];
         while (map != 0) {
             j = ctzl(map);
             map &= ~(1ul << j);
@@ -676,12 +675,10 @@ static void xen_log_start(MemoryListener *listener,
 static void xen_log_stop(MemoryListener *listener, MemoryRegionSection *section,
                          int old, int new)
 {
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-
     if (old & ~new & (1 << DIRTY_MEMORY_VGA)) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
         /* Disable dirty bit tracking */
         xen_track_dirty_vram(xen_domid, 0, 0, NULL);
     }
@@ -1021,9 +1018,9 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
 {
     vmware_regs_t *vmport_regs;
 
-    assert(state->shared_vmport_page);
+    assert(shared_vmport_page);
     vmport_regs =
-        &state->shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
+        &shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
     QEMU_BUILD_BUG_ON(sizeof(*req) < sizeof(*vmport_regs));
 
     current_cpu = state->cpu_by_vcpu_id[state->send_vcpu];
@@ -1468,7 +1465,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 
     state->memory_listener = xen_memory_listener;
     memory_listener_register(&state->memory_listener, &address_space_memory);
-    state->log_for_dirtybit = NULL;
 
     state->io_listener = xen_io_listener;
     memory_listener_register(&state->io_listener, &address_space_io);
@@ -1489,19 +1485,19 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
+    suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&suspend);
 
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
+    wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&wakeup);
 
     rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
     if (!rc) {
         DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
+        shared_vmport_page =
             xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
                                  1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
+        if (shared_vmport_page == NULL) {
             error_report("map shared vmport IO page returned error %d handle=%p",
                          errno, xen_xc);
             goto err;
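
For reviewers reading the pull request without the tree at hand, the pattern
applied above is: a member that is x86-only, and of which effectively a single
instance exists, moves from the per-instance XenIOState struct to a file-scope
static, which also lets callbacks such as xen_log_stop() drop the
container_of() back-reference to the state. Below is a minimal, self-contained
C sketch of that pattern; every name in it (demo_state, demo_log_stop,
demo_dirty_bitmap) is an illustrative stand-in, not a QEMU API.

/* Illustrative sketch only -- hypothetical names, not QEMU code. */
#include <stdio.h>
#include <stdlib.h>

/* Before: the arch-specific buffer lived inside the per-instance struct. */
struct demo_state_before {
    int id;
    unsigned long *dirty_bitmap;    /* only used by one architecture */
};

/* After: the struct keeps only the arch-neutral members... */
struct demo_state {
    int id;
};

/* ...and the arch-specific, effectively-singleton data becomes a
 * file-scope static, reachable without a pointer to the state. */
static unsigned long *demo_dirty_bitmap;

static void demo_log_stop(void)
{
    /* No container_of()/state pointer is needed any more. */
    free(demo_dirty_bitmap);
    demo_dirty_bitmap = NULL;
    printf("demo: dirty bitmap released\n");
}

int main(void)
{
    struct demo_state state = { .id = 1 };

    demo_dirty_bitmap = calloc(4, sizeof(*demo_dirty_bitmap));
    if (!demo_dirty_bitmap) {
        return 1;
    }
    printf("demo: state %d tracking via file-scope bitmap\n", state.id);
    demo_log_stop();
    return 0;
}

The trade-off, as the commit message implies, is that file-scope statics only
stay safe while at most one such state instance exists, which is the case for
the existing x86 xen-hvm setup being refactored here.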