From patchwork Wed Nov 15 11:26:06 2023
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13456559
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Wei Liu, Anthony PERARD, Juergen Gross, Sergiy Kibrik
Subject: [RFC PATCH 1/6] libxl: Pass max_vcpus to Qemu in case of PVH domain (Arm) as well
Date: Wed, 15 Nov 2023 13:26:06 +0200
Message-Id: <20231115112611.3865905-2-Sergiy_Kibrik@epam.com>
In-Reply-To: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>
References: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>

From: Oleksandr Tyshchenko

The number of vCPUs used for the IOREQ configuration (machine->smp.cpus)
should match the system value, as for each vCPU we set up a dedicated
event channel for the communication with Xen at runtime. This is needed
for the IOREQ machinery to be properly configured and to work when the
involved domain has more than one vCPU assigned.

Note that Qemu should set mc->max_cpus to GUEST_MAX_VCPUS in
xen_arm_machine_class_init().

Signed-off-by: Oleksandr Tyshchenko
Signed-off-by: Sergiy Kibrik
---
 tools/libs/light/libxl_dm.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 14b593110f..0b2548d35b 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1553,18 +1553,6 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         if (!libxl__acpi_defbool_val(b_info)) {
             flexarray_append(dm_args, "-no-acpi");
         }
-        if (b_info->max_vcpus > 1) {
-            flexarray_append(dm_args, "-smp");
-            if (b_info->avail_vcpus.size) {
-                int nr_set_cpus = 0;
-                nr_set_cpus = libxl_bitmap_count_set(&b_info->avail_vcpus);
-
-                flexarray_append(dm_args, GCSPRINTF("%d,maxcpus=%d",
-                                                    nr_set_cpus,
-                                                    b_info->max_vcpus));
-            } else
-                flexarray_append(dm_args, GCSPRINTF("%d", b_info->max_vcpus));
-        }
         for (i = 0; i < num_nics; i++) {
             if (nics[i].nictype == LIBXL_NIC_TYPE_VIF_IOEMU) {
                 char *smac = GCSPRINTF(LIBXL_MAC_FMT,
@@ -1800,6 +1788,22 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     for (i = 0; b_info->extra && b_info->extra[i] != NULL; i++)
         flexarray_append(dm_args, b_info->extra[i]);
 
+    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM ||
+        b_info->type == LIBXL_DOMAIN_TYPE_PVH) {
+        if (b_info->max_vcpus > 1) {
+            flexarray_append(dm_args, "-smp");
+            if (b_info->avail_vcpus.size) {
+                int nr_set_cpus = 0;
+                nr_set_cpus = libxl_bitmap_count_set(&b_info->avail_vcpus);
+
+                flexarray_append(dm_args, GCSPRINTF("%d,maxcpus=%d",
+                                                    nr_set_cpus,
+                                                    b_info->max_vcpus));
+            } else
+                flexarray_append(dm_args, GCSPRINTF("%d", b_info->max_vcpus));
+        }
+    }
+
     flexarray_append(dm_args, "-machine");
     switch (b_info->type) {
     case LIBXL_DOMAIN_TYPE_PVH:
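
For illustration only: with the hunk above, an Arm PVH guest configured with
max_vcpus=4 and two vCPUs set in avail_vcpus would now get roughly the
following appended to the device-model command line (previously libxl did not
pass -smp for a PVH guest at all):

    -smp 2,maxcpus=4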
From patchwork Wed Nov 15 11:26:07 2023
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13456558
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Sergiy Kibrik
Subject: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci
Date: Wed, 15 Nov 2023 13:26:07 +0200
Message-Id: <20231115112611.3865905-3-Sergiy_Kibrik@epam.com>
In-Reply-To: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>
References: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>

From: Oleksandr Tyshchenko

In order to enable more use cases, such as having multiple device models
(Qemu) running in different backend domains and providing virtio-pci
devices for the same guest, we allocate and expose one PCI host bridge for
every virtio backend domain of that guest.

For that purpose, reserve separate virtio-pci resources (configuration
space, memory and an SPI range for legacy PCI interrupts) for up to 8
possible PCI hosts (to be aligned with MAX_NR_IOREQ_SERVERS) and allocate
one host per backend domain. The PCI host details, including its host_id,
are written to a dedicated Xenstore node for the device model to retrieve.
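
To illustrate how the ranges reserved by this patch split into the 8 per-host
slots, here is a small stand-alone sketch (not part of the patch). The GUEST_*
values mirror the new arch-arm.h defines below; the per-host strides (2 MB of
ECAM, 8 MB of memory, 8 MB of prefetchable memory, 4 legacy SPIs) mirror the
VIRTIO_PCI_HOST_* constants the toolstack uses in a later patch of this
series:

    #include <stdio.h>

    /* Values mirrored from the series; assumption: 8 hosts maximum. */
    #define GUEST_VIRTIO_PCI_ECAM_BASE          0x33000000ULL
    #define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE     0x00200000ULL
    #define GUEST_VIRTIO_PCI_MEM_ADDR           0x34000000ULL
    #define VIRTIO_PCI_HOST_MEM_SIZE            0x00800000ULL
    #define GUEST_VIRTIO_PCI_PREFETCH_MEM_ADDR  0x3a000000ULL
    #define VIRTIO_PCI_HOST_PREFETCH_MEM_SIZE   0x00800000ULL
    #define GUEST_VIRTIO_PCI_SPI_FIRST          44
    #define VIRTIO_PCI_HOST_NUM_SPIS            4

    int main(void)
    {
        for (unsigned int i = 0; i < 8; i++) {
            unsigned int spi = GUEST_VIRTIO_PCI_SPI_FIRST +
                               i * VIRTIO_PCI_HOST_NUM_SPIS;

            /* Print the slot of reserved resources assigned to host i. */
            printf("host %u: ECAM %#llx MEM %#llx PREFETCH %#llx SPIs %u-%u\n",
                   i,
                   GUEST_VIRTIO_PCI_ECAM_BASE + i * GUEST_VIRTIO_PCI_HOST_ECAM_SIZE,
                   GUEST_VIRTIO_PCI_MEM_ADDR + i * VIRTIO_PCI_HOST_MEM_SIZE,
                   GUEST_VIRTIO_PCI_PREFETCH_MEM_ADDR + i * VIRTIO_PCI_HOST_PREFETCH_MEM_SIZE,
                   spi, spi + VIRTIO_PCI_HOST_NUM_SPIS - 1);
        }
        return 0;
    }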
Signed-off-by: Oleksandr Tyshchenko
Signed-off-by: Sergiy Kibrik
---
 xen/include/public/arch-arm.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index a25e87dbda..e6c9cd5335 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -466,6 +466,19 @@ typedef uint64_t xen_callback_t;
 #define GUEST_VPCI_MEM_ADDR                 xen_mk_ullong(0x23000000)
 #define GUEST_VPCI_MEM_SIZE                 xen_mk_ullong(0x10000000)
 
+/*
+ * 16 MB is reserved for virtio-pci configuration space based on calculation
+ * 8 bridges x 2 buses x 32 devices x 8 functions x 4 KB = 16 MB
+ */
+#define GUEST_VIRTIO_PCI_ECAM_BASE          xen_mk_ullong(0x33000000)
+#define GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE    xen_mk_ullong(0x01000000)
+#define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE     xen_mk_ullong(0x00200000)
+
+/* 64 MB is reserved for virtio-pci memory */
+#define GUEST_VIRTIO_PCI_ADDR_TYPE_MEM      xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_PCI_MEM_ADDR           xen_mk_ullong(0x34000000)
+#define GUEST_VIRTIO_PCI_MEM_SIZE           xen_mk_ullong(0x04000000)
+
 /*
  * 16MB == 4096 pages reserved for guest to use as a region to map its
  * grant table in.
@@ -476,6 +489,11 @@ typedef uint64_t xen_callback_t;
 #define GUEST_MAGIC_BASE                    xen_mk_ullong(0x39000000)
 #define GUEST_MAGIC_SIZE                    xen_mk_ullong(0x01000000)
 
+/* 64 MB is reserved for virtio-pci Prefetch memory */
+#define GUEST_VIRTIO_PCI_ADDR_TYPE_PREFETCH_MEM xen_mk_ullong(0x42000000)
+#define GUEST_VIRTIO_PCI_PREFETCH_MEM_ADDR  xen_mk_ullong(0x3a000000)
+#define GUEST_VIRTIO_PCI_PREFETCH_MEM_SIZE  xen_mk_ullong(0x04000000)
+
 #define GUEST_RAM_BANKS   2
 
 /*
@@ -515,6 +533,9 @@ typedef uint64_t xen_callback_t;
 #define GUEST_VIRTIO_MMIO_SPI_FIRST   33
 #define GUEST_VIRTIO_MMIO_SPI_LAST    43
 
+#define GUEST_VIRTIO_PCI_SPI_FIRST    44
+#define GUEST_VIRTIO_PCI_SPI_LAST     76
+
 /* PSCI functions */
 #define PSCI_cpu_suspend 0
 #define PSCI_cpu_off     1
From patchwork Wed Nov 15 11:26:08 2023
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13456561
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Wei Liu, Anthony PERARD, Juergen Gross, Sergiy Kibrik
Subject: [RFC PATCH 3/6] libxl/arm: Add basic virtio-pci support
Date: Wed, 15 Nov 2023 13:26:08 +0200
Message-Id: <20231115112611.3865905-4-Sergiy_Kibrik@epam.com>
In-Reply-To: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>
References: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>

From: Oleksandr Tyshchenko

Introduce a new transport mechanism "pci" for Virtio devices and update
the parsing and configuration logic accordingly.

In order to enable more use cases, such as having multiple device models
(Qemu) running in different backend domains and providing virtio-pci
devices for the same guest, we allocate and expose one PCI host bridge for
every virtio backend domain of that guest.

Also extend the PCI host bridge DT node exposed to the guest by adding
bindings for legacy PCI interrupts (#INTA - #INTD).

Signed-off-by: Oleksandr Tyshchenko
Signed-off-by: Sergiy Kibrik
---
 docs/man/xl.cfg.5.pod.in          |   9 +-
 tools/libs/light/libxl_arm.c      | 287 ++++++++++++++++++++++++++++--
 tools/libs/light/libxl_create.c   |  18 +-
 tools/libs/light/libxl_dm.c       |  70 ++++++++
 tools/libs/light/libxl_internal.h |   5 +
 tools/libs/light/libxl_types.idl  |  34 +++-
 tools/libs/light/libxl_virtio.c   |  98 +++++++---
 tools/xl/xl_parse.c               |  36 ++++
 8 files changed, 507 insertions(+), 50 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 2e234b450e..0fba750815 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1616,8 +1616,13 @@ hexadecimal format, without the "0x" prefix and all in lower case, like
 
 =item B
 
-Specifies the transport mechanism for the Virtio device, only "mmio" is
-supported for now.
+Specifies the transport mechanism for the Virtio device, both "mmio" and "pci"
+are supported. This option is mandatory.
+
+=item B
+
+The Virtio device with transport "pci" must be identified by its B.
+See L for more details about the format for B.
=item B diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c index 1539191774..df6cbbe756 100644 --- a/tools/libs/light/libxl_arm.c +++ b/tools/libs/light/libxl_arm.c @@ -20,6 +20,11 @@ */ #define VIRTIO_MMIO_DEV_SIZE xen_mk_ullong(0x200) +#define VIRTIO_PCI_HOST_MEM_SIZE xen_mk_ullong(0x800000) +#define VIRTIO_PCI_HOST_PREFETCH_MEM_SIZE xen_mk_ullong(0x800000) +#define VIRTIO_PCI_HOST_NUM_SPIS 4 +#define VIRTIO_PCI_MAX_HOSTS 8 + static uint64_t alloc_virtio_mmio_base(libxl__gc *gc, uint64_t *virtio_mmio_base) { uint64_t base = *virtio_mmio_base; @@ -80,14 +85,101 @@ static const char *gicv_to_string(libxl_gic_version gic_version) } } +static int alloc_virtio_pci_host(libxl__gc *gc, + uint32_t backend_domid, + uint32_t *host_id, + unsigned int *num_hosts, + libxl_virtio_pci_host *hosts) +{ + unsigned int i; + + BUILD_BUG_ON(VIRTIO_PCI_MAX_HOSTS != + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE / GUEST_VIRTIO_PCI_HOST_ECAM_SIZE); + BUILD_BUG_ON(VIRTIO_PCI_MAX_HOSTS != + GUEST_VIRTIO_PCI_MEM_SIZE / VIRTIO_PCI_HOST_MEM_SIZE); + BUILD_BUG_ON(VIRTIO_PCI_MAX_HOSTS != + GUEST_VIRTIO_PCI_PREFETCH_MEM_SIZE / VIRTIO_PCI_HOST_PREFETCH_MEM_SIZE); + BUILD_BUG_ON(VIRTIO_PCI_MAX_HOSTS != + (GUEST_VIRTIO_PCI_SPI_LAST - GUEST_VIRTIO_PCI_SPI_FIRST) / VIRTIO_PCI_HOST_NUM_SPIS); + + if (*num_hosts > VIRTIO_PCI_MAX_HOSTS) + return ERROR_INVAL; + + for (i = 0; i < *num_hosts; i++) { + if (hosts[i].backend_domid == backend_domid) { + *host_id = hosts[i].id; + + LOG(DEBUG, "Reuse host #%u: " + "ECAM: 0x%"PRIx64"-0x%"PRIx64" " + "MEM: 0x%"PRIx64"-0x%"PRIx64" " + "PREFETCH_MEM: 0x%"PRIx64"-0x%"PRIx64" " + "IRQ: %u-%u", + hosts[i].id, + hosts[i].ecam_base, + hosts[i].ecam_base + hosts[i].ecam_size - 1, + hosts[i].mem_base, + hosts[i].mem_base + hosts[i].mem_size - 1, + hosts[i].prefetch_mem_base, + hosts[i].prefetch_mem_base + hosts[i].prefetch_mem_size - 1, + hosts[i].irq_first, + hosts[i].irq_first + hosts[i].num_irqs - 1); + + return 0; + } + } + + if (i == VIRTIO_PCI_MAX_HOSTS) { + LOG(ERROR, "Ran out of reserved resources for virtio-pci host\n"); + return ERROR_FAIL; + } + + hosts[i].backend_domid = backend_domid; + hosts[i].id = i; + hosts[i].ecam_base = GUEST_VIRTIO_PCI_ECAM_BASE + + i * GUEST_VIRTIO_PCI_HOST_ECAM_SIZE; + hosts[i].ecam_size = GUEST_VIRTIO_PCI_HOST_ECAM_SIZE; + hosts[i].mem_base = GUEST_VIRTIO_PCI_MEM_ADDR + + i * VIRTIO_PCI_HOST_MEM_SIZE; + hosts[i].mem_size = VIRTIO_PCI_HOST_MEM_SIZE; + hosts[i].prefetch_mem_base = GUEST_VIRTIO_PCI_PREFETCH_MEM_ADDR + + i * VIRTIO_PCI_HOST_PREFETCH_MEM_SIZE; + hosts[i].prefetch_mem_size = VIRTIO_PCI_HOST_PREFETCH_MEM_SIZE; + hosts[i].irq_first = GUEST_VIRTIO_PCI_SPI_FIRST + + i * VIRTIO_PCI_HOST_NUM_SPIS; + hosts[i].num_irqs = VIRTIO_PCI_HOST_NUM_SPIS; + + *host_id = hosts[i].id; + + (*num_hosts)++; + + LOG(DEBUG, "Allocate host #%u: " + "ECAM: 0x%"PRIx64"-0x%"PRIx64" " + "MEM: 0x%"PRIx64"-0x%"PRIx64" " + "PREFETCH_MEM: 0x%"PRIx64"-0x%"PRIx64" " + "IRQ: %u-%u", + hosts[i].id, + hosts[i].ecam_base, + hosts[i].ecam_base + hosts[i].ecam_size - 1, + hosts[i].mem_base, + hosts[i].mem_base + hosts[i].mem_size - 1, + hosts[i].prefetch_mem_base, + hosts[i].prefetch_mem_base + hosts[i].prefetch_mem_size - 1, + hosts[i].irq_first, + hosts[i].irq_first + hosts[i].num_irqs - 1); + + return 0; +} + int libxl__arch_domain_prepare_config(libxl__gc *gc, libxl_domain_config *d_config, struct xen_domctl_createdomain *config) { uint32_t nr_spis = 0; unsigned int i; - uint32_t vuart_irq, virtio_irq = 0; - bool vuart_enabled = false, virtio_enabled = false; + uint32_t 
vuart_irq, virtio_mmio_irq_last, virtio_pci_irq_last = 0; + bool vuart_enabled = false, virtio_mmio_enabled = false; + unsigned int num_virtio_pci_hosts = 0; + libxl_virtio_pci_host virtio_pci_hosts[VIRTIO_PCI_MAX_HOSTS] = {0}; uint64_t virtio_mmio_base = GUEST_VIRTIO_MMIO_BASE; uint32_t virtio_mmio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST; int rc; @@ -118,11 +210,17 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, for (i = 0; i < d_config->num_virtios; i++) { libxl_device_virtio *virtio = &d_config->virtios[i]; - if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO) - continue; - - rc = alloc_virtio_mmio_params(gc, &virtio->base, &virtio->irq, - &virtio_mmio_base, &virtio_mmio_irq); + if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO) { + rc = alloc_virtio_pci_host(gc, + virtio->backend_domid, + &virtio->u.pci.host_id, + &num_virtio_pci_hosts, + virtio_pci_hosts); + } else { + rc = alloc_virtio_mmio_params(gc, &virtio->u.mmio.base, + &virtio->u.mmio.irq, + &virtio_mmio_base, &virtio_mmio_irq); + } if (rc) return rc; @@ -134,14 +232,25 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, * The resulting "nr_spis" needs to cover the highest possible SPI. */ if (virtio_mmio_irq != GUEST_VIRTIO_MMIO_SPI_FIRST) { - virtio_enabled = true; + virtio_mmio_enabled = true; /* * Assumes that "virtio_mmio_irq" is the highest allocated irq, which is * updated from alloc_virtio_mmio_irq() currently. */ - virtio_irq = virtio_mmio_irq - 1; - nr_spis = max(nr_spis, virtio_irq - 32 + 1); + virtio_mmio_irq_last = virtio_mmio_irq - 1; + nr_spis = max(nr_spis, virtio_mmio_irq_last - 32 + 1); + } + + if (num_virtio_pci_hosts) { + libxl_virtio_pci_host *host = &virtio_pci_hosts[num_virtio_pci_hosts - 1]; + + /* + * Assumes that latest allocated host contains the highest allocated + * irq range. 
+ */ + virtio_pci_irq_last = host->irq_first + host->num_irqs - 1; + nr_spis = max(nr_spis, virtio_pci_irq_last - 32 + 1); } for (i = 0; i < d_config->b_info.num_irqs; i++) { @@ -164,10 +273,14 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, } /* The same check as for vpl011 */ - if (virtio_enabled && - (irq >= GUEST_VIRTIO_MMIO_SPI_FIRST && irq <= virtio_irq)) { + if (virtio_mmio_enabled && + (irq >= GUEST_VIRTIO_MMIO_SPI_FIRST && irq <= virtio_mmio_irq_last)) { LOG(ERROR, "Physical IRQ %u conflicting with Virtio MMIO IRQ range\n", irq); return ERROR_FAIL; + } else if (num_virtio_pci_hosts && + (irq >= GUEST_VIRTIO_PCI_SPI_FIRST && irq <= virtio_pci_irq_last)) { + LOG(ERROR, "Physical IRQ %u conflicting with Virtio PCI IRQ range\n", irq); + return ERROR_FAIL; } if (irq < 32) @@ -179,6 +292,14 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, nr_spis = spi + 1; } + if (num_virtio_pci_hosts) { + d_config->b_info.num_virtio_pci_hosts = num_virtio_pci_hosts; + d_config->b_info.virtio_pci_hosts = libxl__calloc(NOGC, + num_virtio_pci_hosts, sizeof(*d_config->b_info.virtio_pci_hosts)); + memcpy(d_config->b_info.virtio_pci_hosts, virtio_pci_hosts, + sizeof(*d_config->b_info.virtio_pci_hosts) * num_virtio_pci_hosts); + } + LOG(DEBUG, "Configure the domain"); config->arch.nr_spis = nr_spis; @@ -908,6 +1029,130 @@ static int make_vpci_node(libxl__gc *gc, void *fdt, return 0; } +#define PCI_IRQ_MAP_MIN_STRIDE 8 + +static int create_virtio_pci_irq_map(libxl__gc *gc, void *fdt, + libxl_virtio_pci_host *host) +{ + uint32_t *full_irq_map, *irq_map; + size_t len; + unsigned int slot, pin; + int res, cells; + + res = fdt_property_cell(fdt, "#interrupt-cells", 1); + if (res) return res; + + /* assume GIC node to be present, due to + * make_gicv2_node()/make_gicv3_node() get called earlier + */ + res = fdt_node_offset_by_phandle(fdt, GUEST_PHANDLE_GIC); + if (res < 0) + return res; + + res = fdt_address_cells(fdt, res); + /* handle case of make_gicv2_node() setting #address-cells to 0 */ + if (res == -FDT_ERR_BADNCELLS) + res = 0; + else if (res < 0) + return res; + + cells = res; + len = sizeof(uint32_t) * host->num_irqs * host->num_irqs * + (PCI_IRQ_MAP_MIN_STRIDE + cells); + irq_map = full_irq_map = libxl__malloc(gc, len); + + for (slot = 0; slot < host->num_irqs; slot++) { + for (pin = 0; pin < host->num_irqs; pin++) { + uint32_t irq = host->irq_first + ((pin + slot) % host->num_irqs); + unsigned int i = 0; + + /* PCI address (3 cells) */ + irq_map[i++] = cpu_to_fdt32(PCI_DEVFN(slot, 0) << 8); + irq_map[i++] = cpu_to_fdt32(0); + irq_map[i++] = cpu_to_fdt32(0); + + /* PCI interrupt (1 cell) */ + irq_map[i++] = cpu_to_fdt32(pin + 1); + + /* GIC phandle (1 cell) */ + irq_map[i++] = cpu_to_fdt32(GUEST_PHANDLE_GIC); + + /* GIC unit address, set 0 because vgic itself handles vpci IRQs */ + for (int c = cells; c--; irq_map[i++] = cpu_to_fdt32(0)); + + /* GIC interrupt (3 cells) */ + irq_map[i++] = cpu_to_fdt32(0); /* SPI */ + irq_map[i++] = cpu_to_fdt32(irq - 32); + irq_map[i++] = cpu_to_fdt32(DT_IRQ_TYPE_LEVEL_HIGH); + + irq_map += PCI_IRQ_MAP_MIN_STRIDE + cells; + } + } + + res = fdt_property(fdt, "interrupt-map", full_irq_map, len); + if (res) return res; + + res = fdt_property_values(gc, fdt, "interrupt-map-mask", 4, + PCI_DEVFN(3, 0) << 8, 0, 0, 0x7); + if (res) return res; + + return 0; +} + +/* TODO Consider reusing make_vpci_node() */ +static int make_virtio_pci_node(libxl__gc *gc, void *fdt, + libxl_virtio_pci_host *host, + libxl_domain_config *d_config) +{ + int res; + const char *name = 
GCSPRINTF("pcie@%"PRIx64, host->ecam_base); + + res = fdt_begin_node(fdt, name); + if (res) return res; + + res = fdt_property_compat(gc, fdt, 1, "pci-host-ecam-generic"); + if (res) return res; + + res = fdt_property_string(fdt, "device_type", "pci"); + if (res) return res; + + res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, + GUEST_ROOT_SIZE_CELLS, 1, host->ecam_base, host->ecam_size); + if (res) return res; + + res = fdt_property_values(gc, fdt, "bus-range", 2, 0, 1); + if (res) return res; + + res = fdt_property_cell(fdt, "#address-cells", 3); + if (res) return res; + + res = fdt_property_cell(fdt, "#size-cells", 2); + if (res) return res; + + res = fdt_property_string(fdt, "status", "okay"); + if (res) return res; + + res = fdt_property_vpci_ranges(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, + GUEST_ROOT_SIZE_CELLS, 2, + GUEST_VIRTIO_PCI_ADDR_TYPE_MEM, host->mem_base, host->mem_size, + GUEST_VIRTIO_PCI_ADDR_TYPE_PREFETCH_MEM, host->prefetch_mem_base, + host->prefetch_mem_size); + if (res) return res; + + /* The same property as for virtio-mmio device */ + res = fdt_property(fdt, "dma-coherent", NULL, 0); + if (res) return res; + + /* Legacy PCI interrupts (#INTA - #INTD) */ + res = create_virtio_pci_irq_map(gc, fdt, host); + if (res) return res; + + res = fdt_end_node(fdt); + if (res) return res; + + return 0; +} + static int make_xen_iommu_node(libxl__gc *gc, void *fdt) { int res; @@ -1384,20 +1629,26 @@ next_resize: for (i = 0; i < d_config->num_virtios; i++) { libxl_device_virtio *virtio = &d_config->virtios[i]; - if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO) - continue; - if (libxl_defbool_val(virtio->grant_usage)) iommu_needed = true; - FDT( make_virtio_mmio_node_device(gc, fdt, virtio->base, - virtio->irq, virtio->type, + if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO) + continue; + + FDT( make_virtio_mmio_node_device(gc, fdt, virtio->u.mmio.base, + virtio->u.mmio.irq, virtio->type, virtio->backend_domid, libxl_defbool_val(virtio->grant_usage)) ); } + for (i = 0; i < d_config->b_info.num_virtio_pci_hosts; i++) { + libxl_virtio_pci_host *host = &d_config->b_info.virtio_pci_hosts[i]; + + FDT( make_virtio_pci_node(gc, fdt, host, d_config) ); + } + /* - * The iommu node should be created only once for all virtio-mmio + * The iommu node should be created only once for all virtio * devices. */ if (iommu_needed) diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c index ce1d431103..22b4fa40cc 100644 --- a/tools/libs/light/libxl_create.c +++ b/tools/libs/light/libxl_create.c @@ -1273,8 +1273,9 @@ int libxl__domain_config_setdefault(libxl__gc *gc, } for (i = 0; i < d_config->num_virtios; i++) { - ret = libxl__virtio_devtype.set_default(gc, domid, - &d_config->virtios[i], false); + libxl_device_virtio *virtio = &d_config->virtios[i]; + + ret = libxl__virtio_devtype.set_default(gc, domid, virtio, false); if (ret) { LOGD(ERROR, domid, "Unable to set virtio defaults for device %d", i); goto error_out; @@ -1770,6 +1771,19 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev, for (i = 0; i < d_config->num_virtios; i++) libxl__device_add(gc, domid, &libxl__virtio_devtype, &d_config->virtios[i]); + /* + * This should be done before spawning device model, but after + * the creation of "device-model" directory in Xenstore. 
+ */ + for (i = 0; i < d_config->b_info.num_virtio_pci_hosts; i++) { + libxl_virtio_pci_host *host = &d_config->b_info.virtio_pci_hosts[i]; + + ret = libxl__save_dm_virtio_pci_host(gc, domid, host); + if (ret) { + LOGD(ERROR, domid, "Unable to save virtio_pci_host for device model"); + goto error_out; + } + } switch (d_config->c_info.type) { case LIBXL_DOMAIN_TYPE_HVM: diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c index 0b2548d35b..4e9391fc08 100644 --- a/tools/libs/light/libxl_dm.c +++ b/tools/libs/light/libxl_dm.c @@ -3375,6 +3375,76 @@ static void device_model_postconfig_done(libxl__egc *egc, dmss->callback(egc, dmss, rc); } +int libxl__save_dm_virtio_pci_host(libxl__gc *gc, + uint32_t domid, + libxl_virtio_pci_host *host) +{ + const char *dm_path; + char **dir; + xs_transaction_t t = XBT_NULL; + unsigned int n; + int rc; + + dm_path = GCSPRINTF("/local/domain/%d/device-model", host->backend_domid); + + dir = libxl__xs_directory(gc, XBT_NULL, dm_path, &n); + if (!dir) + return ERROR_INVAL; + + dm_path = DEVICE_MODEL_XS_PATH(gc, host->backend_domid, domid, "/virtio_pci_host"); + + for (;;) { + rc = libxl__xs_transaction_start(gc, &t); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/id", dm_path), + GCSPRINTF("%u", host->id)); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/ecam_base", dm_path), + GCSPRINTF("%#"PRIx64, host->ecam_base)); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/ecam_size", dm_path), + GCSPRINTF("%#"PRIx64, host->ecam_size)); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/mem_base", dm_path), + GCSPRINTF("%#"PRIx64, host->mem_base)); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/mem_size", dm_path), + GCSPRINTF("%#"PRIx64, host->mem_size)); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/prefetch_mem_base", dm_path), + GCSPRINTF("%#"PRIx64, host->prefetch_mem_base)); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/prefetch_mem_size", dm_path), + GCSPRINTF("%#"PRIx64, host->prefetch_mem_size)); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/irq_first", dm_path), + GCSPRINTF("%u", host->irq_first)); + if (rc) goto out; + + rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/num_irqs", dm_path), + GCSPRINTF("%u", host->num_irqs)); + if (rc) goto out; + + rc = libxl__xs_transaction_commit(gc, &t); + if (!rc) break; + if (rc < 0) goto out; + } + + return 0; + +out: + libxl__xs_transaction_abort(gc, &t); + return rc; +} + void libxl__spawn_qdisk_backend(libxl__egc *egc, libxl__dm_spawn_state *dmss) { STATE_AO_GC(dmss->spawn.ao); diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h index d5732d1c37..75d370d739 100644 --- a/tools/libs/light/libxl_internal.h +++ b/tools/libs/light/libxl_internal.h @@ -4199,6 +4199,11 @@ _hidden void libxl__spawn_qdisk_backend(libxl__egc *egc, libxl__dm_spawn_state *dmss); _hidden int libxl__destroy_qdisk_backend(libxl__gc *gc, uint32_t domid); + +_hidden int libxl__save_dm_virtio_pci_host(libxl__gc *gc, + uint32_t domid, + libxl_virtio_pci_host *host); + /*----- Domain creation -----*/ diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl index 7d8bd5d216..a86c601994 100644 --- a/tools/libs/light/libxl_types.idl +++ b/tools/libs/light/libxl_types.idl @@ -281,6 +281,7 @@ libxl_vkb_backend = Enumeration("vkb_backend", [ libxl_virtio_transport = 
Enumeration("virtio_transport", [ (0, "UNKNOWN"), (1, "MMIO"), + (2, "PCI"), ]) libxl_passthrough = Enumeration("passthrough", [ @@ -558,6 +559,19 @@ libxl_altp2m_mode = Enumeration("altp2m_mode", [ (3, "limited"), ], init_val = "LIBXL_ALTP2M_MODE_DISABLED") +libxl_virtio_pci_host = Struct("virtio_pci_host", [ + ("backend_domid", libxl_domid), + ("id", uint32), + ("ecam_base", uint64), + ("ecam_size", uint64), + ("mem_base", uint64), + ("mem_size", uint64), + ("prefetch_mem_base", uint64), + ("prefetch_mem_size", uint64), + ("irq_first", uint32), + ("num_irqs", uint32), + ]) + libxl_domain_build_info = Struct("domain_build_info",[ ("max_vcpus", integer), ("avail_vcpus", libxl_bitmap), @@ -631,6 +645,7 @@ libxl_domain_build_info = Struct("domain_build_info",[ ("apic", libxl_defbool), ("dm_restrict", libxl_defbool), ("tee", libxl_tee_type), + ("virtio_pci_hosts", Array(libxl_virtio_pci_host, "num_virtio_pci_hosts")), ("u", KeyedUnion(None, libxl_domain_type, "type", [("hvm", Struct(None, [("firmware", string), ("bios", libxl_bios_type), @@ -764,13 +779,22 @@ libxl_device_virtio = Struct("device_virtio", [ ("backend_domid", libxl_domid), ("backend_domname", string), ("type", string), - ("transport", libxl_virtio_transport), + ("u", KeyedUnion(None, libxl_virtio_transport, "transport", + [("unknown", None), + # Note that virtio-mmio parameters (irq and base) are for internal + # use by libxl and can't be modified. + ("mmio", Struct(None, [("irq", uint32), + ("base", uint64), + ])), + ("pci", Struct(None, [("func", uint8), + ("dev", uint8), + ("bus", uint8), + ("domain", uint16), + ("host_id", uint32), + ])), + ])), ("grant_usage", libxl_defbool), ("devid", libxl_devid), - # Note that virtio-mmio parameters (irq and base) are for internal - # use by libxl and can't be modified. 
- ("irq", uint32), - ("base", uint64) ]) libxl_device_disk = Struct("device_disk", [ diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c index e5e321adc5..8062423c75 100644 --- a/tools/libs/light/libxl_virtio.c +++ b/tools/libs/light/libxl_virtio.c @@ -57,8 +57,21 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid, { const char *transport = libxl_virtio_transport_to_string(virtio->transport); - flexarray_append_pair(back, "irq", GCSPRINTF("%u", virtio->irq)); - flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->base)); + if (virtio->transport == LIBXL_VIRTIO_TRANSPORT_MMIO) { + flexarray_append_pair(back, "irq", GCSPRINTF("%u", virtio->u.mmio.irq)); + flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->u.mmio.base)); + } else { + /* + * TODO: + * Probably we will also need to store PCI Host bridge details (irq and + * mem ranges) this particular PCI device belongs to if emulator cannot + * or should not rely on what is described at include/public/arch-arm.h + */ + flexarray_append_pair(back, "bdf", GCSPRINTF("%04x:%02x:%02x.%01x", + virtio->u.pci.domain, virtio->u.pci.bus, + virtio->u.pci.dev, virtio->u.pci.func)); + flexarray_append_pair(back, "host_id", GCSPRINTF("%u", virtio->u.pci.host_id)); + } flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type)); flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport)); flexarray_append_pair(back, "grant_usage", @@ -84,33 +97,72 @@ static int libxl__virtio_from_xenstore(libxl__gc *gc, const char *libxl_path, rc = libxl__backendpath_parse_domid(gc, be_path, &virtio->backend_domid); if (rc) goto out; - rc = libxl__xs_read_checked(gc, XBT_NULL, - GCSPRINTF("%s/irq", be_path), &tmp); - if (rc) goto out; - - if (tmp) { - virtio->irq = strtoul(tmp, NULL, 0); + tmp = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/transport", be_path)); + if (!tmp) { + LOG(ERROR, "Missing xenstore node %s/transport", be_path); + rc = ERROR_INVAL; + goto out; } - tmp = NULL; - rc = libxl__xs_read_checked(gc, XBT_NULL, - GCSPRINTF("%s/base", be_path), &tmp); - if (rc) goto out; + rc = libxl_virtio_transport_from_string(tmp, &virtio->transport); + if (rc) { + LOG(ERROR, "Unable to parse xenstore node %s/transport", be_path); + goto out; + } - if (tmp) { - virtio->base = strtoul(tmp, NULL, 0); + if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO && + virtio->transport != LIBXL_VIRTIO_TRANSPORT_PCI) { + LOG(ERROR, "Unexpected transport for virtio"); + rc = ERROR_INVAL; + goto out; } - tmp = NULL; - rc = libxl__xs_read_checked(gc, XBT_NULL, - GCSPRINTF("%s/transport", be_path), &tmp); - if (rc) goto out; + if (virtio->transport == LIBXL_VIRTIO_TRANSPORT_MMIO) { + tmp = NULL; + rc = libxl__xs_read_checked(gc, XBT_NULL, + GCSPRINTF("%s/irq", be_path), &tmp); + if (rc) goto out; - if (tmp) { - if (!strcmp(tmp, "mmio")) { - virtio->transport = LIBXL_VIRTIO_TRANSPORT_MMIO; - } else { - return ERROR_INVAL; + if (tmp) { + virtio->u.mmio.irq = strtoul(tmp, NULL, 0); + } + + tmp = NULL; + rc = libxl__xs_read_checked(gc, XBT_NULL, + GCSPRINTF("%s/base", be_path), &tmp); + if (rc) goto out; + + if (tmp) { + virtio->u.mmio.base = strtoul(tmp, NULL, 0); + } + } else { + unsigned int domain, bus, dev, func; + + tmp = NULL; + rc = libxl__xs_read_checked(gc, XBT_NULL, + GCSPRINTF("%s/bdf", be_path), &tmp); + if (rc) goto out; + + if (tmp) { + if (sscanf(tmp, "%04x:%02x:%02x.%01x", + &domain, &bus, &dev, &func) != 4) { + rc = ERROR_INVAL; + goto out; + } + + virtio->u.pci.domain = domain; + 
virtio->u.pci.bus = bus; + virtio->u.pci.dev = dev; + virtio->u.pci.func = func; + } + + tmp = NULL; + rc = libxl__xs_read_checked(gc, XBT_NULL, + GCSPRINTF("%s/host_id", be_path), &tmp); + if (rc) goto out; + + if (tmp) { + virtio->u.pci.host_id = strtoul(tmp, NULL, 0); } } diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c index ed983200c3..4544ce2307 100644 --- a/tools/xl/xl_parse.c +++ b/tools/xl/xl_parse.c @@ -1217,6 +1217,24 @@ static int parse_virtio_config(libxl_device_virtio *virtio, char *token) if (rc) return rc; } else if (MATCH_OPTION("grant_usage", token, oparg)) { libxl_defbool_set(&virtio->grant_usage, strtoul(oparg, NULL, 0)); + } else if (MATCH_OPTION("bdf", token, oparg)) { + /* + * TODO: + * We pretend that we are ordinary PCI device to reuse BDF parsing + * logic. This needs to be properly reused by adjusting parse_bdf(). + */ + libxl_device_pci pci; + + rc = xlu_pci_parse_bdf(NULL, &pci, oparg); + if (rc) { + fprintf(stderr, "Unable to parse BDF `%s' for virtio-pci\n", oparg); + return -1; + } + + virtio->u.pci.domain = pci.domain; + virtio->u.pci.bus = pci.bus; + virtio->u.pci.dev = pci.dev; + virtio->u.pci.func = pci.func; } else { fprintf(stderr, "Unknown string \"%s\" in virtio spec\n", token); return -1; @@ -1238,6 +1256,7 @@ static void parse_virtio_list(const XLU_Config *config, while ((item = xlu_cfg_get_listitem(virtios, entry)) != NULL) { libxl_device_virtio *virtio; char *p; + bool bdf_present = false; virtio = ARRAY_EXTEND_INIT(d_config->virtios, d_config->num_virtios, libxl_device_virtio_init); @@ -1260,6 +1279,8 @@ static void parse_virtio_list(const XLU_Config *config, strcat(str, p2); p = str; } + } else if (MATCH_OPTION("bdf", p, oparg)) { + bdf_present = true; } rc = parse_virtio_config(virtio, p); @@ -1270,6 +1291,21 @@ static void parse_virtio_list(const XLU_Config *config, p = strtok(NULL, ","); } + if (virtio->transport == LIBXL_VIRTIO_TRANSPORT_UNKNOWN) { + fprintf(stderr, "Unspecified transport for virtio\n"); + rc = ERROR_FAIL; goto out; + } + + if (virtio->transport == LIBXL_VIRTIO_TRANSPORT_PCI && + !bdf_present) { + fprintf(stderr, "BDF must be specified for virtio-pci\n"); + rc = ERROR_FAIL; goto out; + } else if (virtio->transport == LIBXL_VIRTIO_TRANSPORT_MMIO && + bdf_present) { + fprintf(stderr, "BDF must not be specified for virtio-mmio\n"); + rc = ERROR_FAIL; goto out; + } + entry++; free(buf); } From patchwork Wed Nov 15 11:26:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sergiy Kibrik X-Patchwork-Id: 13456560 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 683A4C07548 for ; Wed, 15 Nov 2023 11:26:41 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.633573.988530 (Exim 4.92) (envelope-from ) id 1r3E2T-0005AH-0P; Wed, 15 Nov 2023 11:26:33 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 633573.988530; Wed, 15 Nov 2023 11:26:32 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1r3E2S-0005A4-T4; Wed, 15 Nov 2023 11:26:32 +0000 Received: by outflank-mailman 
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Wei Liu, Anthony PERARD, Juergen Gross, Sergiy Kibrik
Subject: [RFC PATCH 4/6] libxl/arm: Reuse generic PCI-IOMMU bindings for virtio-pci devices
Date: Wed, 15 Nov 2023 13:26:09 +0200
Message-Id: <20231115112611.3865905-5-Sergiy_Kibrik@epam.com>
In-Reply-To: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>
References: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>

From: Oleksandr Tyshchenko

Use the same "xen-grant-dma" device concept for PCI devices behind a
device-tree based PCI host controller, but with one modification. Unlike
for platform devices, we cannot use the generic IOMMU bindings (the
"iommus" property), as we need to support a more flexible configuration:
PCI devices under a single PCI host controller may have their backends
running in different Xen domains and thus have different endpoint IDs
(backend domain IDs).

Reuse the generic PCI-IOMMU bindings (the "iommu-map"/"iommu-map-mask"
properties), which let us describe the relationship between PCI devices
and backend domain IDs properly. A Linux guest is already able to deal
with the generic PCI-IOMMU bindings (see Linux drivers/xen/grant-dma-ops.c
for details).
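
For illustration only, assuming a single virtio-pci device at 00:00.0 whose
backend runs in domain 1, and using a made-up &xen_iommu label for the
xen,grant-dma IOMMU node, the PCI host bridge node exposed to the guest would
carry an iommu-map roughly like this (per the binding documents referenced
below):

    pcie@33000000 {
        compatible = "pci-host-ecam-generic";
        device_type = "pci";
        /* reg, ranges, bus-range, interrupt-map, ... as generated by libxl */

        /* Each entry: <rid_base  iommu_phandle  iommu_base (backend domid)  length> */
        iommu-map = <0x0 &xen_iommu 0x1 0x8>;
    };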
According to Linux: - Documentation/devicetree/bindings/pci/pci-iommu.txt - Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml Signed-off-by: Oleksandr Tyshchenko Signed-off-by: Sergiy Kibrik --- tools/libs/light/libxl_arm.c | 64 ++++++++++++++++++++++++++++++++++++ 1 file changed, 64 insertions(+) diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c index df6cbbe756..03cf3e424b 100644 --- a/tools/libs/light/libxl_arm.c +++ b/tools/libs/light/libxl_arm.c @@ -1030,6 +1030,7 @@ static int make_vpci_node(libxl__gc *gc, void *fdt, } #define PCI_IRQ_MAP_MIN_STRIDE 8 +#define PCI_IOMMU_MAP_STRIDE 4 static int create_virtio_pci_irq_map(libxl__gc *gc, void *fdt, libxl_virtio_pci_host *host) @@ -1099,6 +1100,65 @@ static int create_virtio_pci_irq_map(libxl__gc *gc, void *fdt, return 0; } +/* XXX Consider reusing libxl__realloc() to avoid an extra loop */ +static int create_virtio_pci_iommu_map(libxl__gc *gc, void *fdt, + libxl_virtio_pci_host *host, + libxl_domain_config *d_config) +{ + uint32_t *full_iommu_map, *iommu_map; + unsigned int i, len, ntranslated = 0; + int res; + + for (i = 0; i < d_config->num_virtios; i++) { + libxl_device_virtio *virtio = &d_config->virtios[i]; + + if (libxl_defbool_val(virtio->grant_usage) && + virtio->transport == LIBXL_VIRTIO_TRANSPORT_PCI && + virtio->u.pci.host_id == host->id) { + ntranslated++; + } + } + + if (!ntranslated) + return 0; + + len = ntranslated * sizeof(uint32_t) * PCI_IOMMU_MAP_STRIDE; + full_iommu_map = libxl__malloc(gc, len); + iommu_map = full_iommu_map; + + /* See Linux Documentation/devicetree/bindings/pci/pci-iommu.txt */ + for (i = 0; i < d_config->num_virtios; i++) { + libxl_device_virtio *virtio = &d_config->virtios[i]; + + if (libxl_defbool_val(virtio->grant_usage) && + virtio->transport == LIBXL_VIRTIO_TRANSPORT_PCI && + virtio->u.pci.host_id == host->id) { + uint16_t bdf = (virtio->u.pci.bus << 8) | + (virtio->u.pci.dev << 3) | virtio->u.pci.func; + unsigned int j = 0; + + /* rid_base (1 cell) */ + iommu_map[j++] = cpu_to_fdt32(bdf); + + /* iommu_phandle (1 cell) */ + iommu_map[j++] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU); + + /* iommu_base (1 cell) */ + iommu_map[j++] = cpu_to_fdt32(virtio->backend_domid); + + /* length (1 cell) */ + iommu_map[j++] = cpu_to_fdt32(1 << 3); + + iommu_map += PCI_IOMMU_MAP_STRIDE; + } + } + + res = fdt_property(fdt, "iommu-map", full_iommu_map, len); + if (res) return res; + + return 0; +} + /* TODO Consider reusing make_vpci_node() */ static int make_virtio_pci_node(libxl__gc *gc, void *fdt, libxl_virtio_pci_host *host, @@ -1147,6 +1207,10 @@ static int make_virtio_pci_node(libxl__gc *gc, void *fdt, res = create_virtio_pci_irq_map(gc, fdt, host); if (res) return res; + /* xen,grant-dma bindings */ + res = create_virtio_pci_iommu_map(gc, fdt, host, d_config); + if (res) return res; + res = fdt_end_node(fdt); if (res) return res; From patchwork Wed Nov 15 11:26:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sergiy Kibrik X-Patchwork-Id: 13456562 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B4E89C07548 for ; Wed, 15 Nov 2023 11:26:49 +0000 (UTC) Received: from list by lists.xenproject.org with 
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Sergiy Kibrik
Subject: [RFC PATCH 5/6] xen/arm: Intercept vPCI config accesses and forward them to emulator
Date: Wed, 15 Nov 2023 13:26:10 +0200
Message-Id: <20231115112611.3865905-6-Sergiy_Kibrik@epam.com>
In-Reply-To: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>
References: <20231115112611.3865905-1-Sergiy_Kibrik@epam.com>

From: Oleksandr Tyshchenko

This is needed to support virtio-pci. When the PCI host bridge is emulated
outside of Xen (by an IOREQ server), we need a mechanism to intercept
config space accesses on Xen on Arm and forward them to the emulator (for
example, a virtio backend) via an IOREQ request. Unlike x86, on Arm these
accesses are plain MMIO; there is no CF8/CFC mechanism to detect which PCI
device is targeted.
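
As a worked illustration (a sketch, not part of the patch) of the resulting
address decoding: the constants mirror the arch-arm.h defines from the earlier
patch, the field layout is the standard ECAM one, and the computed segment is
the per-backend host_id, matching what virtio_pci_ioreq_server_get_addr()
below derives:

    #include <stdint.h>
    #include <stdio.h>

    #define GUEST_VIRTIO_PCI_ECAM_BASE       0x33000000ULL
    #define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE  0x00200000ULL

    int main(void)
    {
        uint64_t gpa = 0x33200010ULL; /* example config access from the guest */
        uint64_t off = gpa - GUEST_VIRTIO_PCI_ECAM_BASE;
        unsigned int seg = off / GUEST_VIRTIO_PCI_HOST_ECAM_SIZE;
        uint64_t ecam = off % GUEST_VIRTIO_PCI_HOST_ECAM_SIZE;
        /* Standard ECAM: bus[27:20], device[19:15], function[14:12], reg[11:0] */
        unsigned int bus = (ecam >> 20) & 0xff, dev = (ecam >> 15) & 0x1f;
        unsigned int fn = (ecam >> 12) & 0x7, reg = ecam & 0xfff;

        /* Prints: seg 1 bdf 00:00.0 reg 0x10 (device on the second bridge) */
        printf("seg %u bdf %02x:%02x.%x reg %#x\n", seg, bus, dev, fn, reg);
        return 0;
    }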
In order to not mix PCI passthrough with virtio-pci features we add one more region to cover the total configuration space for all possible host bridges which can serve virtio-pci devices for that guest. We expose one PCI host bridge per virtio backend domain. To distinguish between virtio-pci devices belonging to PCI host bridges emulated by device-models running in different backend domains we also need to calculate a segment in virtio_pci_ioreq_server_get_addr(). For this to work the toolstack is responsible to allocate and assign unique configuration space range and segment (host_id) within total reserved range for these device-models. The device-models are responsible for applying a segment when forming DM op for registering PCI devices with IOREQ Server. Introduce new CONFIG_VIRTIO_PCI to guard the whole handling. Signed-off-by: Oleksandr Tyshchenko Signed-off-by: Sergiy Kibrik --- xen/arch/arm/Kconfig | 10 +++ xen/arch/arm/domain.c | 2 +- xen/arch/arm/{ => include/asm}/vpci.h | 11 +++ xen/arch/arm/io.c | 8 +- xen/arch/arm/ioreq.c | 19 ++++- xen/arch/arm/vpci.c | 106 +++++++++++++++++++++++++- 6 files changed, 146 insertions(+), 10 deletions(-) rename xen/arch/arm/{ => include/asm}/vpci.h (75%) diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index 2939db429b..9e8d6c4ce2 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -190,6 +190,16 @@ config STATIC_SHM help This option enables statically shared memory on a dom0less system. +config VIRTIO_PCI + bool "Support of virtio-pci transport" if EXPERT + depends on ARM_64 + select HAS_PCI + select HAS_VPCI + select IOREQ_SERVER + default n + help + This option enables support of virtio-pci transport + endmenu menu "ARM errata workaround via the alternative framework" diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 28e3aaa5e4..140f9bbd58 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -28,9 +28,9 @@ #include #include #include +#include #include -#include "vpci.h" #include "vuart.h" DEFINE_PER_CPU(struct vcpu *, curr_vcpu); diff --git a/xen/arch/arm/vpci.h b/xen/arch/arm/include/asm/vpci.h similarity index 75% rename from xen/arch/arm/vpci.h rename to xen/arch/arm/include/asm/vpci.h index 3c713f3fcd..54d083c67b 100644 --- a/xen/arch/arm/vpci.h +++ b/xen/arch/arm/include/asm/vpci.h @@ -30,6 +30,17 @@ static inline unsigned int domain_vpci_get_num_mmio_handlers(struct domain *d) } #endif +#ifdef CONFIG_VIRTIO_PCI +bool virtio_pci_ioreq_server_get_addr(const struct domain *d, + paddr_t gpa, uint64_t *addr); +#else +static inline bool virtio_pci_ioreq_server_get_addr(const struct domain *d, + paddr_t gpa, uint64_t *addr) +{ + return false; +} +#endif + #endif /* __ARCH_ARM_VPCI_H__ */ /* diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c index 96c740d563..5c3e03e30d 100644 --- a/xen/arch/arm/io.c +++ b/xen/arch/arm/io.c @@ -26,6 +26,7 @@ static enum io_state handle_read(const struct mmio_handler *handler, { const struct hsr_dabt dabt = info->dabt; struct cpu_user_regs *regs = guest_cpu_user_regs(); + int ret; /* * Initialize to zero to avoid leaking data if there is an * implementation error in the emulation (such as not correctly @@ -33,8 +34,9 @@ static enum io_state handle_read(const struct mmio_handler *handler, */ register_t r = 0; - if ( !handler->ops->read(v, info, &r, handler->priv) ) - return IO_ABORT; + ret = handler->ops->read(v, info, &r, handler->priv); + if ( ret != IO_HANDLED ) + return ret != IO_RETRY ? 
IO_ABORT : ret; r = sign_extend(dabt, r); @@ -53,7 +55,7 @@ static enum io_state handle_write(const struct mmio_handler *handler, ret = handler->ops->write(v, info, get_user_reg(regs, dabt.reg), handler->priv); - return ret ? IO_HANDLED : IO_ABORT; + return ret != IO_HANDLED && ret != IO_RETRY ? IO_ABORT : ret; } /* This function assumes that mmio regions are not overlapped */ diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c index 5df755b48b..fd4cc755b6 100644 --- a/xen/arch/arm/ioreq.c +++ b/xen/arch/arm/ioreq.c @@ -10,6 +10,7 @@ #include #include +#include #include @@ -193,12 +194,24 @@ bool arch_ioreq_server_get_type_addr(const struct domain *d, uint8_t *type, uint64_t *addr) { + uint64_t pci_addr; + if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO ) return false; - *type = (p->type == IOREQ_TYPE_PIO) ? - XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; - *addr = p->addr; + if ( p->type == IOREQ_TYPE_COPY && + virtio_pci_ioreq_server_get_addr(d, p->addr, &pci_addr) ) + { + /* PCI config data cycle */ + *type = XEN_DMOP_IO_RANGE_PCI; + *addr = pci_addr; + } + else + { + *type = (p->type == IOREQ_TYPE_PIO) ? + XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; + *addr = p->addr; + } return true; } diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c index 3bc4bb5508..1de4c3e71b 100644 --- a/xen/arch/arm/vpci.c +++ b/xen/arch/arm/vpci.c @@ -2,9 +2,12 @@ /* * xen/arch/arm/vpci.c */ +#include #include +#include #include +#include #include static pci_sbdf_t vpci_sbdf_from_gpa(const struct pci_host_bridge *bridge, @@ -24,6 +27,27 @@ static pci_sbdf_t vpci_sbdf_from_gpa(const struct pci_host_bridge *bridge, return sbdf; } +bool virtio_pci_ioreq_server_get_addr(const struct domain *d, + paddr_t gpa, uint64_t *addr) +{ + pci_sbdf_t sbdf; + + if ( !has_vpci(d) ) + return false; + + if ( gpa < GUEST_VIRTIO_PCI_ECAM_BASE || + gpa >= GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE ) + return false; + + sbdf.sbdf = VPCI_ECAM_BDF((gpa - GUEST_VIRTIO_PCI_ECAM_BASE) % + GUEST_VIRTIO_PCI_HOST_ECAM_SIZE); + sbdf.seg = (gpa - GUEST_VIRTIO_PCI_ECAM_BASE) / + GUEST_VIRTIO_PCI_HOST_ECAM_SIZE; + *addr = ((uint64_t)sbdf.sbdf << 32) | ECAM_REG_OFFSET(gpa); + + return true; +} + static int vpci_mmio_read(struct vcpu *v, mmio_info_t *info, register_t *r, void *p) { @@ -36,12 +60,12 @@ static int vpci_mmio_read(struct vcpu *v, mmio_info_t *info, 1U << info->dabt.size, &data) ) { *r = data; - return 1; + return IO_HANDLED; } *r = ~0ul; - return 0; + return IO_ABORT; } static int vpci_mmio_write(struct vcpu *v, mmio_info_t *info, @@ -59,6 +83,61 @@ static const struct mmio_handler_ops vpci_mmio_handler = { .write = vpci_mmio_write, }; +#ifdef CONFIG_VIRTIO_PCI +static int virtio_pci_mmio_read(struct vcpu *v, mmio_info_t *info, + register_t *r, void *p) +{ + const uint8_t access_size = (1 << info->dabt.size) * 8; + const uint64_t access_mask = GENMASK_ULL(access_size - 1, 0); + int rc = IO_HANDLED; + + ASSERT(!is_hardware_domain(v->domain)); + + if ( domain_has_ioreq_server(v->domain) ) + { + rc = try_fwd_ioserv(guest_cpu_user_regs(), v, info); + if ( rc == IO_HANDLED ) + { + *r = v->io.req.data; + v->io.req.state = STATE_IOREQ_NONE; + return IO_HANDLED; + } + else if ( rc == IO_UNHANDLED ) + rc = IO_HANDLED; + } + + *r = access_mask; + return rc; +} + +static int virtio_pci_mmio_write(struct vcpu *v, mmio_info_t *info, + register_t r, void *p) +{ + int rc = IO_HANDLED; + + ASSERT(!is_hardware_domain(v->domain)); + + if ( domain_has_ioreq_server(v->domain) ) + { + rc = 
try_fwd_ioserv(guest_cpu_user_regs(), v, info); + if ( rc == IO_HANDLED ) + { + v->io.req.state = STATE_IOREQ_NONE; + return IO_HANDLED; + } + else if ( rc == IO_UNHANDLED ) + rc = IO_HANDLED; + } + + return rc; +} + +static const struct mmio_handler_ops virtio_pci_mmio_handler = { + .read = virtio_pci_mmio_read, + .write = virtio_pci_mmio_write, +}; +#endif + static int vpci_setup_mmio_handler_cb(struct domain *d, struct pci_host_bridge *bridge) { @@ -90,9 +169,17 @@ int domain_vpci_init(struct domain *d) return ret; } else + { register_mmio_handler(d, &vpci_mmio_handler, GUEST_VPCI_ECAM_BASE, GUEST_VPCI_ECAM_SIZE, NULL); +#ifdef CONFIG_VIRTIO_PCI + register_mmio_handler(d, &virtio_pci_mmio_handler, + GUEST_VIRTIO_PCI_ECAM_BASE, + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE, NULL); +#endif + } + return 0; } @@ -105,6 +192,8 @@ static int vpci_get_num_handlers_cb(struct domain *d, unsigned int domain_vpci_get_num_mmio_handlers(struct domain *d) { + unsigned int count; + if ( !has_vpci(d) ) return 0; @@ -125,7 +214,18 @@ unsigned int domain_vpci_get_num_mmio_handlers(struct domain *d) * For guests each host bridge requires one region to cover the * configuration space. At the moment, we only expose a single host bridge. */ - return 1; + count = 1; + + /* + * In order to not mix PCI passthrough with virtio-pci features we add + * one more region to cover the total configuration space for all possible + * host bridges which can serve virtio devices for that guest. + * We expose one host bridge per virtio backend domain. + */ + if ( IS_ENABLED(CONFIG_VIRTIO_PCI) ) + count++; + + return count; } /* From patchwork Wed Nov 15 11:44:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sergiy Kibrik X-Patchwork-Id: 13456567 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1E7E7C07548 for ; Wed, 15 Nov 2023 11:44:46 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.633593.988570 (Exim 4.92) (envelope-from ) id 1r3EJy-0003yZ-Ch; Wed, 15 Nov 2023 11:44:38 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 633593.988570; Wed, 15 Nov 2023 11:44:38 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1r3EJy-0003yM-9R; Wed, 15 Nov 2023 11:44:38 +0000 Received: by outflank-mailman (input) for mailman id 633593; Wed, 15 Nov 2023 11:44:36 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1r3EJw-0003xI-Mb for xen-devel@lists.xenproject.org; Wed, 15 Nov 2023 11:44:36 +0000 Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 57faf81d-83ac-11ee-9b0e-b553b5be7939; Wed, 15 Nov 2023 12:44:34 +0100 (CET) Received: from pb-smtp21.pobox.com (unknown [127.0.0.1]) by pb-smtp21.pobox.com (Postfix) with ESMTP id 11D053168E; Wed, 15 Nov 2023 06:44:32 -0500 (EST) (envelope-from sakib@darkstar.site) Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1]) by pb-smtp21.pobox.com 
(Postfix) with ESMTP id F0D533168D; Wed, 15 Nov 2023 06:44:31 -0500 (EST) (envelope-from sakib@darkstar.site) Received: from localhost (unknown [185.130.54.109]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 81D3531686; Wed, 15 Nov 2023 06:44:28 -0500 (EST) (envelope-from sakib@darkstar.site) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 57faf81d-83ac-11ee-9b0e-b553b5be7939 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc :subject:date:message-id:mime-version:content-transfer-encoding; s=sasl; bh=1byc8tTDalGgfp9r8WtxNSc0wnf6PEpeIh9WLNVUOYE=; b=pIJb dSgT9UVyiPRk05M4ckO/xEThRgyiIOQUWhSrmAqKAUtZ6a9kS/btfY3YJHSvo5Rm 1W5zWfwdCntZOuGCcus98TBP80ybOuASlzm6YpkkfEn0QS2yd5xKEusPenMVY+mU I0xp/Ap9G/y1XAHR5apJ9p8NBHLwMnB1WzInEVg= From: Sergiy Kibrik To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Wei Liu , Anthony PERARD , Juergen Gross , Sergiy Kibrik Subject: [RFC PATCH 6/6] libxl: Add "backend_type" property for the Virtio devices Date: Wed, 15 Nov 2023 13:44:24 +0200 Message-Id: <20231115114424.3867133-1-Sergiy_Kibrik@epam.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-Pobox-Relay-ID: 5566DE5E-83AC-11EE-BD2C-A19503B9AAD1-90055647!pb-smtp21.pobox.com From: Oleksandr Tyshchenko
Introduce a new configuration option, "backend_type", for the Virtio devices in order to specify which backend implementation to use. There are two possible values: "qemu" (default) and "standalone". If the backend is in Qemu (backend_type=qemu) and Qemu runs in the toolstack domain (backend=Domain-0), then Qemu will be launched automatically at guest creation time. For this to work, implement the "dm_needed" callback. Please note that there is no support for Qemu in other domains for the time being, so the combination of "backend=DomD" and "backend_type=qemu" just won't work. The Qemu configuration for Virtio devices should be described via the "device_model_args" property.
Signed-off-by: Oleksandr Tyshchenko Signed-off-by: Sergiy Kibrik Reviewed-by: Juergen Gross
--- docs/man/xl.cfg.5.pod.in | 9 +++++++++ tools/libs/light/libxl_types.idl | 7 +++++++ tools/libs/light/libxl_virtio.c | 29 ++++++++++++++++++++++++++++- tools/xl/xl_parse.c | 3 +++ 4 files changed, 47 insertions(+), 1 deletion(-)
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in index 0fba750815..592aad1d1e 100644 --- a/docs/man/xl.cfg.5.pod.in +++ b/docs/man/xl.cfg.5.pod.in @@ -1624,6 +1624,15 @@ are supported. This option is mandatory. The Virtio device with transport "pci" must be identified by its B. See L for more details about the format for B. +=item B + +Specifies which backend implementation to use. +This option doesn't affect the guest's view of the Virtio device. + +Both "qemu" and "standalone" are supported. The only difference is +that for the former the toolstack assists with configuring and launching +the device-model. If this option is missing, the "qemu" value will be used. + =item B If this option is B, the Xen grants are always enabled. 
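As a purely illustrative usage sketch (not part of the patch itself), a guest config could select the backend implementation per device along the following lines; everything except the "backend" and "backend_type" keys is a placeholder value here, not something this patch defines:

    virtio = [ "type=virtio,device,transport=pci,bdf=0000:00:00.0,backend=Domain-0,backend_type=qemu" ]
    virtio = [ "type=virtio,device,transport=pci,bdf=0000:00:00.0,backend=DomD,backend_type=standalone" ]

In the first case the toolstack would launch Qemu in the toolstack domain at guest creation time; in the second case the standalone backend running in DomD is expected to be started by other means.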
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl index a86c601994..13b8ade41c 100644 --- a/tools/libs/light/libxl_types.idl +++ b/tools/libs/light/libxl_types.idl @@ -284,6 +284,12 @@ libxl_virtio_transport = Enumeration("virtio_transport", [ (2, "PCI"), ]) +libxl_virtio_backend = Enumeration("virtio_backend", [ + (0, "UNKNOWN"), + (1, "QEMU"), + (2, "STANDALONE"), + ]) + libxl_passthrough = Enumeration("passthrough", [ (0, "default"), (1, "disabled"), @@ -778,6 +784,7 @@ libxl_device_vkb = Struct("device_vkb", [ libxl_device_virtio = Struct("device_virtio", [ ("backend_domid", libxl_domid), ("backend_domname", string), + ("backend_type", libxl_virtio_backend), ("type", string), ("u", KeyedUnion(None, libxl_virtio_transport, "transport", [("unknown", None), diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c index 8062423c75..339a2006f0 100644 --- a/tools/libs/light/libxl_virtio.c +++ b/tools/libs/light/libxl_virtio.c @@ -32,9 +32,20 @@ static int libxl__device_virtio_setdefault(libxl__gc *gc, uint32_t domid, libxl_defbool_setdefault(&virtio->grant_usage, virtio->backend_domid != LIBXL_TOOLSTACK_DOMID); + if (virtio->backend_type == LIBXL_VIRTIO_BACKEND_UNKNOWN) + virtio->backend_type = LIBXL_VIRTIO_BACKEND_QEMU; + return 0; } +static int libxl__device_virtio_dm_needed(void *e, unsigned domid) +{ + libxl_device_virtio *elem = e; + + return elem->backend_type == LIBXL_VIRTIO_BACKEND_QEMU && + elem->backend_domid == domid; +} + static int libxl__device_from_virtio(libxl__gc *gc, uint32_t domid, libxl_device_virtio *virtio, libxl__device *device) @@ -55,7 +66,8 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid, flexarray_t *back, flexarray_t *front, flexarray_t *ro_front) { - const char *transport = libxl_virtio_transport_to_string(virtio->transport); + const char *transport = libxl_virtio_transport_to_string(virtio->transport), + *backend = libxl_virtio_backend_to_string(virtio->backend_type); if (virtio->transport == LIBXL_VIRTIO_TRANSPORT_MMIO) { flexarray_append_pair(back, "irq", GCSPRINTF("%u", virtio->u.mmio.irq)); @@ -74,6 +86,7 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid, } flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type)); flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport)); + flexarray_append_pair(back, "backend_type", GCSPRINTF("%s", backend)); flexarray_append_pair(back, "grant_usage", libxl_defbool_val(virtio->grant_usage) ? 
"1" : "0"); @@ -166,6 +179,19 @@ static int libxl__virtio_from_xenstore(libxl__gc *gc, const char *libxl_path, } } + tmp = NULL; + rc = libxl__xs_read_checked(gc, XBT_NULL, + GCSPRINTF("%s/backend_type", be_path), &tmp); + if (rc) goto out; + + if (tmp) { + rc = libxl_virtio_backend_from_string(tmp, &virtio->backend_type); + if (rc) { + LOG(ERROR, "Unable to parse xenstore node %s/backend_type", be_path); + goto out; + } + } + tmp = NULL; rc = libxl__xs_read_checked(gc, XBT_NULL, GCSPRINTF("%s/grant_usage", be_path), &tmp); @@ -200,6 +226,7 @@ static LIBXL_DEFINE_UPDATE_DEVID(virtio) #define libxl_device_virtio_compare NULL DEFINE_DEVICE_TYPE_STRUCT(virtio, VIRTIO, virtios, + .dm_needed = libxl__device_virtio_dm_needed, .set_xenstore_config = (device_set_xenstore_config_fn_t) libxl__set_xenstore_virtio, .from_xenstore = (device_from_xenstore_fn_t)libxl__virtio_from_xenstore, diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c index 4544ce2307..234cef5f7e 100644 --- a/tools/xl/xl_parse.c +++ b/tools/xl/xl_parse.c @@ -1215,6 +1215,9 @@ static int parse_virtio_config(libxl_device_virtio *virtio, char *token) } else if (MATCH_OPTION("transport", token, oparg)) { rc = libxl_virtio_transport_from_string(oparg, &virtio->transport); if (rc) return rc; + } else if (MATCH_OPTION("backend_type", token, oparg)) { + rc = libxl_virtio_backend_from_string(oparg, &virtio->backend_type); + if (rc) return rc; } else if (MATCH_OPTION("grant_usage", token, oparg)) { libxl_defbool_set(&virtio->grant_usage, strtoul(oparg, NULL, 0)); } else if (MATCH_OPTION("bdf", token, oparg)) {