From patchwork Fri Oct 25 15:01:51 2024
X-Patchwork-Submitter: "Durawa, Szymon"
X-Patchwork-Id: 13850750
From: Szymon Durawa
To: helgaas@kernel.org
Cc: Szymon Durawa, Dan Williams, Lukas Wunner, linux-pci@vger.kernel.org,
    Nirmal Patel, Mariusz Tkaczyk
Subject: [RFC PATCH v1 1/3] PCI: vmd: Clean up vmd_enable_domain function
Date: Fri, 25 Oct 2024 17:01:51 +0200
Message-Id: <20241025150153.983306-2-szymon.durawa@linux.intel.com>
In-Reply-To: <20241025150153.983306-1-szymon.durawa@linux.intel.com>
References: <20241025150153.983306-1-szymon.durawa@linux.intel.com>
X-Mailer: git-send-email 2.39.3
X-Mailing-List: linux-pci@vger.kernel.org

vmd_enable_domain() is too long and needs to be split into smaller
helpers to make it more readable. This cleanup is prework for enabling
an additional VMD bus range. It does not change the functional behavior
of vmd_enable_domain().

Suggested-by: Nirmal Patel
Reviewed-by: Mariusz Tkaczyk
Signed-off-by: Szymon Durawa
---
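Note below the cut line, which git-am discards: the core of this
refactor is replacing the bare indices 0/1/2 into vmd->resources[]
with named enum values, and widening vmd->bus/busn_start into
one-element arrays sized by the enums. The standalone C sketch below
illustrates only that indexing scheme; struct fake_vmd_dev and
struct fake_resource are simplified stand-ins invented for
illustration, not driver code.

/*
 * Userspace sketch of the enum-indexed arrays introduced by this
 * patch. Build with: cc -std=c99 -Wall sketch.c
 */
#include <stdio.h>

enum vmd_resource {
	VMD_RES_CFGBAR = 0,
	VMD_RES_MBAR_1,
	VMD_RES_MBAR_2,
	VMD_RES_COUNT
};

enum vmd_rootbus {
	VMD_BUS_0 = 0,
	VMD_BUS_COUNT
};

/* Invented stand-ins for the real vmd.c structures */
struct fake_resource {
	const char *name;
};

struct fake_vmd_dev {
	struct fake_resource resources[VMD_RES_COUNT];
	unsigned char busn_start[VMD_BUS_COUNT];
};

int main(void)
{
	struct fake_vmd_dev vmd = {
		.resources = {
			[VMD_RES_CFGBAR] = { "VMD CFGBAR"  },
			[VMD_RES_MBAR_1] = { "VMD MEMBAR1" },
			[VMD_RES_MBAR_2] = { "VMD MEMBAR2" },
		},
		/* 224 matches the BUS_RESTRICT_CFG(reg) == 2 case below */
		.busn_start = { [VMD_BUS_0] = 224 },
	};

	/* vmd->resources[0] becomes vmd->resources[VMD_RES_CFGBAR], etc. */
	for (int i = 0; i < VMD_RES_COUNT; i++)
		printf("resources[%d] = %s\n", i, vmd.resources[i].name);
	printf("bus 0 starts at %d\n", vmd.busn_start[VMD_BUS_0]);
	return 0;
}

Sizing the arrays with VMD_RES_COUNT/VMD_BUS_COUNT means a later patch
can add a bus range by extending the enums without touching existing
users.
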
 drivers/pci/controller/vmd.c | 262 +++++++++++++++++++++-------------
 1 file changed, 161 insertions(+), 101 deletions(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 264a180403a0..7cce7354b5c2 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -34,6 +34,18 @@
 #define MB2_SHADOW_OFFSET	0x2000
 #define MB2_SHADOW_SIZE		16
 
+enum vmd_resource {
+	VMD_RES_CFGBAR = 0,
+	VMD_RES_MBAR_1,	/* VMD Resource MemBAR 1 */
+	VMD_RES_MBAR_2,	/* VMD Resource MemBAR 2 */
+	VMD_RES_COUNT
+};
+
+enum vmd_rootbus {
+	VMD_BUS_0 = 0,
+	VMD_BUS_COUNT
+};
+
 enum vmd_features {
 	/*
 	 * Device may contain registers which hint the physical location of the
@@ -132,10 +144,10 @@ struct vmd_dev {
 	struct vmd_irq_list	*irqs;
 
 	struct pci_sysdata	sysdata;
-	struct resource		resources[3];
+	struct resource		resources[VMD_RES_COUNT];
 	struct irq_domain	*irq_domain;
-	struct pci_bus		*bus;
-	u8			busn_start;
+	struct pci_bus		*bus[VMD_BUS_COUNT];
+	u8			busn_start[VMD_BUS_COUNT];
 	u8			first_vec;
 	char			*name;
 	int			instance;
@@ -367,7 +379,7 @@ static void vmd_remove_irq_domain(struct vmd_dev *vmd)
 static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
 				  unsigned int devfn, int reg, int len)
 {
-	unsigned int busnr_ecam = bus->number - vmd->busn_start;
+	unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0];
 	u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
 
 	if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR]))
@@ -505,7 +517,7 @@ static inline void vmd_acpi_end(void) { }
 
 static void vmd_domain_reset(struct vmd_dev *vmd)
 {
-	u16 bus, max_buses = resource_size(&vmd->resources[0]);
+	u16 bus, max_buses = resource_size(&vmd->resources[VMD_RES_CFGBAR]);
 	u8 dev, functions, fn, hdr_type;
 	char __iomem *base;
 
@@ -553,8 +565,8 @@ static void vmd_domain_reset(struct vmd_dev *vmd)
 
 static void vmd_attach_resources(struct vmd_dev *vmd)
 {
-	vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[1];
-	vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[2];
+	vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[VMD_RES_MBAR_1];
+	vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[VMD_RES_MBAR_2];
 }
 
 static void vmd_detach_resources(struct vmd_dev *vmd)
@@ -644,13 +656,13 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd)
 
 	switch (BUS_RESTRICT_CFG(reg)) {
 	case 0:
-		vmd->busn_start = 0;
+		vmd->busn_start[VMD_BUS_0] = 0;
 		break;
 	case 1:
-		vmd->busn_start = 128;
+		vmd->busn_start[VMD_BUS_0] = 128;
 		break;
 	case 2:
-		vmd->busn_start = 224;
+		vmd->busn_start[VMD_BUS_0] = 224;
 		break;
 	default:
 		pci_err(dev, "Unknown Bus Offset Setting (%d)\n",
@@ -767,17 +779,126 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
 	return 0;
 }
 
-static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+static void vmd_configure_cfgbar(struct vmd_dev *vmd)
 {
-	struct pci_sysdata *sd = &vmd->sysdata;
-	struct resource *res;
+	struct resource *res = &vmd->dev->resource[VMD_CFGBAR];
+
+	vmd->resources[VMD_RES_CFGBAR] = (struct resource){
+		.name = "VMD CFGBAR",
+		.start = vmd->busn_start[VMD_BUS_0],
+		.end = vmd->busn_start[VMD_BUS_0] +
+		       (resource_size(res) >> 20) - 1,
+		.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
+	};
+}
+
+/**
+ * vmd_configure_membar - Configure VMD MemBAR register, which points
+ * to the MMIO address assigned by the OS or BIOS.
+ * @vmd: the VMD device
+ * @resource_number: resource buffer number to be filled in
+ * @membar_number: number of the MemBAR
+ * @start_offset: 4K-aligned offset applied to start of VMD's MEMBAR MMIO space
+ * @end_offset: 4K-aligned offset applied to end of VMD's MEMBAR MMIO space
+ * @parent: resource assigned as a parent, may be NULL
+ *
+ * Fill the resource buffer inside the VMD structure.
+ */
+static void vmd_configure_membar(struct vmd_dev *vmd,
+				 enum vmd_resource resource_number,
+				 u8 membar_number, resource_size_t start_offset,
+				 resource_size_t end_offset,
+				 struct resource *parent)
+{
+	struct resource *res_parent;
 	u32 upper_bits;
 	unsigned long flags;
+	const char *name;
+
+	struct resource *res = &vmd->dev->resource[membar_number];
+
+	upper_bits = upper_32_bits(res->end);
+	flags = res->flags & ~IORESOURCE_SIZEALIGN;
+	if (!upper_bits)
+		flags &= ~IORESOURCE_MEM_64;
+
+	name = membar_number == VMD_MEMBAR1 ? "VMD MEMBAR1" : "VMD MEMBAR2";
+
+	res_parent = parent;
+	if (!res_parent)
+		res_parent = res;
+
+	vmd->resources[resource_number] = (struct resource){
+		.name = name,
+		.start = res->start + start_offset,
+		.end = res->end - end_offset,
+		.flags = flags,
+		.parent = res_parent,
+	};
+}
+
+static void vmd_configure_membar1_membar2(struct vmd_dev *vmd,
+					  resource_size_t mbar2_ofs)
+{
+	vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0, NULL);
+	vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
+			     mbar2_ofs, 0, NULL);
+}
+
+static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
+{
+	struct pci_bus *child;
+	struct pci_dev *dev;
+	int ret;
+
+	vmd_acpi_begin();
+
+	pci_scan_child_bus(bus);
+	vmd_domain_reset(vmd_from_bus(bus));
+
+	/*
+	 * When Intel VMD is enabled, the OS does not discover the Root Ports
+	 * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
+	 * a reset to the parent of the PCI device supplied as argument. This
+	 * is why we pass a child device, so the reset can be triggered at
+	 * the Intel bridge level and propagated to all the children in the
+	 * hierarchy.
+	 */
+	list_for_each_entry(child, &bus->children, node) {
+		if (!list_empty(&child->devices)) {
+			dev = list_first_entry(&child->devices, struct pci_dev,
+					       bus_list);
+			ret = pci_reset_bus(dev);
+			if (ret)
+				pci_warn(dev, "can't reset device: %d\n", ret);
+
+			break;
+		}
+	}
+
+	pci_assign_unassigned_bus_resources(bus);
+
+	pci_walk_bus(bus, vmd_pm_enable_quirk, &features);
+
+	/*
+	 * VMD root buses are virtual and don't return true on pci_is_pcie()
+	 * and will fail pcie_bus_configure_settings() early. It can instead be
+	 * run on each of the real root ports.
+	 */
+	list_for_each_entry(child, &bus->children, node)
+		pcie_bus_configure_settings(child);
+
+	pci_bus_add_devices(bus);
+
+	vmd_acpi_end();
+}
+
+static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+{
+	struct pci_sysdata *sd = &vmd->sysdata;
 	LIST_HEAD(resources);
 	resource_size_t offset[2] = {0};
 	resource_size_t membar2_offset = 0x2000;
-	struct pci_bus *child;
-	struct pci_dev *dev;
 	int ret;
 
 	/*
@@ -807,13 +928,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 			return ret;
 	}
 
-	res = &vmd->dev->resource[VMD_CFGBAR];
-	vmd->resources[0] = (struct resource) {
-		.name  = "VMD CFGBAR",
-		.start = vmd->busn_start,
-		.end   = vmd->busn_start + (resource_size(res) >> 20) - 1,
-		.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
-	};
+	vmd_configure_cfgbar(vmd);
 
 	/*
 	 * If the window is below 4GB, clear IORESOURCE_MEM_64 so we can
@@ -827,36 +942,12 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	 *
 	 * The only way we could use a 64-bit non-prefetchable MEMBAR is
 	 * if its address is <4GB so that we can convert it to a 32-bit
-	 * resource.  To be visible to the host OS, all VMD endpoints must
+	 * resource. To be visible to the host OS, all VMD endpoints must
 	 * be initially configured by platform BIOS, which includes setting
-	 * up these resources.  We can assume the device is configured
+	 * up these resources. We can assume the device is configured
 	 * according to the platform needs.
 	 */
-	res = &vmd->dev->resource[VMD_MEMBAR1];
-	upper_bits = upper_32_bits(res->end);
-	flags = res->flags & ~IORESOURCE_SIZEALIGN;
-	if (!upper_bits)
-		flags &= ~IORESOURCE_MEM_64;
-	vmd->resources[1] = (struct resource) {
-		.name  = "VMD MEMBAR1",
-		.start = res->start,
-		.end   = res->end,
-		.flags = flags,
-		.parent = res,
-	};
-
-	res = &vmd->dev->resource[VMD_MEMBAR2];
-	upper_bits = upper_32_bits(res->end);
-	flags = res->flags & ~IORESOURCE_SIZEALIGN;
-	if (!upper_bits)
-		flags &= ~IORESOURCE_MEM_64;
-	vmd->resources[2] = (struct resource) {
-		.name  = "VMD MEMBAR2",
-		.start = res->start + membar2_offset,
-		.end   = res->end,
-		.flags = flags,
-		.parent = res,
-	};
+	vmd_configure_membar1_membar2(vmd, membar2_offset);
 
 	sd->vmd_dev = vmd->dev;
 	sd->domain = vmd_find_free_domain();
@@ -892,70 +983,39 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 		vmd_set_msi_remapping(vmd, false);
 	}
 
-	pci_add_resource(&resources, &vmd->resources[0]);
-	pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
-	pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
+	pci_add_resource(&resources, &vmd->resources[VMD_RES_CFGBAR]);
+	pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_1],
+				offset[0]);
+	pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_2],
+				offset[1]);
 
-	vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
-				       &vmd_ops, sd, &resources);
-	if (!vmd->bus) {
+	vmd->bus[VMD_BUS_0] = pci_create_root_bus(&vmd->dev->dev,
+						  vmd->busn_start[VMD_BUS_0],
+						  &vmd_ops, sd, &resources);
+	if (!vmd->bus[VMD_BUS_0]) {
 		pci_free_resource_list(&resources);
 		vmd_remove_irq_domain(vmd);
 		return -ENODEV;
 	}
 
-	vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
-				   to_pci_host_bridge(vmd->bus->bridge));
+	vmd_copy_host_bridge_flags(
+		pci_find_host_bridge(vmd->dev->bus),
+		to_pci_host_bridge(vmd->bus[VMD_BUS_0]->bridge));
 
 	vmd_attach_resources(vmd);
 	if (vmd->irq_domain)
-		dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
+		dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev,
+				   vmd->irq_domain);
 	else
-		dev_set_msi_domain(&vmd->bus->dev,
+		dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev,
 				   dev_get_msi_domain(&vmd->dev->dev));
 
-	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
-		       "domain"), "Can't create symlink to domain\n");
-
-	vmd_acpi_begin();
+	WARN(sysfs_create_link(&vmd->dev->dev.kobj,
+			       &vmd->bus[VMD_BUS_0]->dev.kobj, "domain"),
+	     "Can't create symlink to domain\n");
 
-	pci_scan_child_bus(vmd->bus);
-	vmd_domain_reset(vmd);
+	vmd_bus_enumeration(vmd->bus[VMD_BUS_0], features);
 
-	/* When Intel VMD is enabled, the OS does not discover the Root Ports
-	 * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
-	 * a reset to the parent of the PCI device supplied as argument. This
-	 * is why we pass a child device, so the reset can be triggered at
-	 * the Intel bridge level and propagated to all the children in the
-	 * hierarchy.
-	 */
-	list_for_each_entry(child, &vmd->bus->children, node) {
-		if (!list_empty(&child->devices)) {
-			dev = list_first_entry(&child->devices,
-					       struct pci_dev, bus_list);
-			ret = pci_reset_bus(dev);
-			if (ret)
-				pci_warn(dev, "can't reset device: %d\n", ret);
-
-			break;
-		}
-	}
-
-	pci_assign_unassigned_bus_resources(vmd->bus);
-
-	pci_walk_bus(vmd->bus, vmd_pm_enable_quirk, &features);
-
-	/*
-	 * VMD root buses are virtual and don't return true on pci_is_pcie()
-	 * and will fail pcie_bus_configure_settings() early. It can instead be
-	 * run on each of the real root ports.
-	 */
-	list_for_each_entry(child, &vmd->bus->children, node)
-		pcie_bus_configure_settings(child);
-
-	pci_bus_add_devices(vmd->bus);
-
-	vmd_acpi_end();
 	return 0;
 }
 
@@ -1031,9 +1091,9 @@ static void vmd_remove(struct pci_dev *dev)
 {
 	struct vmd_dev *vmd = pci_get_drvdata(dev);
 
-	pci_stop_root_bus(vmd->bus);
+	pci_stop_root_bus(vmd->bus[VMD_BUS_0]);
 	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
-	pci_remove_root_bus(vmd->bus);
+	pci_remove_root_bus(vmd->bus[VMD_BUS_0]);
 	vmd_cleanup_srcu(vmd);
 	vmd_detach_resources(vmd);
 	vmd_remove_irq_domain(vmd);
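
-- 
A quick illustration of the vmd_configure_membar() contract described
in its kernel-doc above: start_offset trims the head of the MEMBAR
MMIO window and end_offset trims the tail. The BAR addresses in this
standalone sketch are made up; only 0x2000 comes from the driver's
default membar2_offset.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Made-up window, reused for both BARs for brevity */
	uint64_t bar_start = 0xe0000000, bar_end = 0xe1ffffff;
	uint64_t mbar2_start_ofs = 0x2000;	/* driver default membar2_offset */

	/* MEMBAR1 is configured with zero offsets: the full window */
	printf("MEMBAR1: %#" PRIx64 "-%#" PRIx64 "\n", bar_start, bar_end);

	/* MEMBAR2 skips start_offset bytes at the head of the window */
	printf("MEMBAR2: %#" PRIx64 "-%#" PRIx64 "\n",
	       bar_start + mbar2_start_ofs, bar_end);
	return 0;
}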