From patchwork Mon Sep 9 17:03:57 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Jan Kiszka
X-Patchwork-Id: 13797326
From: Jan Kiszka
To: Nishanth Menon, Santosh Shilimkar, Vignesh Raghavendra, Tero Kristo,
	Rob Herring, Krzysztof Kozlowski, Conor Dooley,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	Siddharth Vadapalli, Bao Cheng Su, Hua Qian Li, Diogo Ivo,
	Lorenzo Pieralisi, Krzysztof Wilczyński, Bjorn Helgaas
Subject: [PATCH v6 4/7] PCI: keystone: Add support for PVU-based DMA isolation on AM654
Date: Mon, 9 Sep 2024 19:03:57 +0200
Message-ID: 
In-Reply-To: 
References: 

From: Jan Kiszka

The AM654 lacks an IOMMU and thus cannot isolate DMA requests from
untrusted PCI devices to selected memory regions in that way. Use static
PVU-based protection instead.
The PVU, when enabled, will only accept DMA requests that address
previously configured regions. Use the availability of a
restricted-dma-pool memory region as the trigger and register it as a
valid DMA target with the PVU.

In addition, enable the mapping of requester IDs to VirtIDs in the PCI
RC. Use only a single VirtID so far, catching all devices. This may be
extended later on.

Signed-off-by: Jan Kiszka
Acked-by: Bjorn Helgaas
---
CC: Lorenzo Pieralisi
CC: "Krzysztof Wilczyński"
CC: Bjorn Helgaas
CC: linux-pci@vger.kernel.org
---
 drivers/pci/controller/dwc/pci-keystone.c | 108 ++++++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
index 2219b1a866fa..a5954cae6d5d 100644
--- a/drivers/pci/controller/dwc/pci-keystone.c
+++ b/drivers/pci/controller/dwc/pci-keystone.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -26,6 +27,7 @@
 #include
 #include
 #include
+#include
 
 #include "../../pci.h"
 #include "pcie-designware.h"
@@ -111,6 +113,16 @@
 
 #define PCI_DEVICE_ID_TI_AM654X		0xb00c
 
+#define KS_PCI_VIRTID			0
+
+#define PCIE_VMAP_xP_CTRL		0x0
+#define PCIE_VMAP_xP_REQID		0x4
+#define PCIE_VMAP_xP_VIRTID		0x8
+
+#define PCIE_VMAP_xP_CTRL_EN		BIT(0)
+
+#define PCIE_VMAP_xP_VIRTID_VID_MASK	0xfff
+
 struct ks_pcie_of_data {
 	enum dw_pcie_device_mode mode;
 	const struct dw_pcie_host_ops *host_ops;
@@ -1125,6 +1137,96 @@ static const struct of_device_id ks_pcie_of_match[] = {
 	{ },
 };
 
+#ifdef CONFIG_TI_PVU
+static int ks_init_vmap(struct platform_device *pdev, const char *vmap_name)
+{
+	struct resource *res;
+	void __iomem *base;
+	u32 val;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, vmap_name);
+	base = devm_pci_remap_cfg_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	writel(0, base + PCIE_VMAP_xP_REQID);
+
+	val = readl(base + PCIE_VMAP_xP_VIRTID);
+	val &= ~PCIE_VMAP_xP_VIRTID_VID_MASK;
+	val |= KS_PCI_VIRTID;
+	writel(val, base + PCIE_VMAP_xP_VIRTID);
+
+	val = readl(base + PCIE_VMAP_xP_CTRL);
+	val |= PCIE_VMAP_xP_CTRL_EN;
+	writel(val, base + PCIE_VMAP_xP_CTRL);
+
+	return 0;
+}
+
+static int ks_init_restricted_dma(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct of_phandle_iterator it;
+	struct resource phys;
+	int err;
+
+	/* Only process the first restricted dma pool, more are not allowed */
+	of_for_each_phandle(&it, err, dev->of_node, "memory-region",
+			    NULL, 0) {
+		if (of_device_is_compatible(it.node, "restricted-dma-pool"))
+			break;
+	}
+	if (err)
+		return err == -ENOENT ? 0 : err;
+
+	err = of_address_to_resource(it.node, 0, &phys);
+	if (err < 0) {
+		dev_err(dev, "failed to parse memory region %pOF: %d\n",
+			it.node, err);
+		return 0;
+	}
+
+	/* Map all incoming requests on low and high prio port to virtID 0 */
+	err = ks_init_vmap(pdev, "vmap_lp");
+	if (err)
+		return err;
+	err = ks_init_vmap(pdev, "vmap_hp");
+	if (err)
+		return err;
+
+	/*
+	 * Enforce DMA pool usage with the help of the PVU.
+	 * Any request outside will be dropped and raise an error at the PVU.
+	 */
+	return ti_pvu_create_region(KS_PCI_VIRTID, &phys);
+}
+
+static void ks_release_restricted_dma(struct platform_device *pdev)
+{
+	struct of_phandle_iterator it;
+	struct resource phys;
+	int err;
+
+	of_for_each_phandle(&it, err, pdev->dev.of_node, "memory-region",
+			    NULL, 0) {
+		if (of_device_is_compatible(it.node, "restricted-dma-pool") &&
+		    of_address_to_resource(it.node, 0, &phys) == 0) {
+			ti_pvu_remove_region(KS_PCI_VIRTID, &phys);
+			break;
+		}
+	}
+}
+#else
+static inline int ks_init_restricted_dma(struct platform_device *pdev)
+{
+	return 0;
+}
+
+static inline void ks_release_restricted_dma(struct platform_device *pdev)
+{
+}
+#endif
+
 static int ks_pcie_probe(struct platform_device *pdev)
 {
 	const struct dw_pcie_host_ops *host_ops;
@@ -1273,6 +1375,10 @@ static int ks_pcie_probe(struct platform_device *pdev)
 	if (ret < 0)
 		goto err_get_sync;
 
+	ret = ks_init_restricted_dma(pdev);
+	if (ret < 0)
+		goto err_get_sync;
+
 	switch (mode) {
 	case DW_PCIE_RC_TYPE:
 		if (!IS_ENABLED(CONFIG_PCI_KEYSTONE_HOST)) {
@@ -1354,6 +1460,8 @@ static void ks_pcie_remove(struct platform_device *pdev)
 	int num_lanes = ks_pcie->num_lanes;
 	struct device *dev = &pdev->dev;
 
+	ks_release_restricted_dma(pdev);
+
 	pm_runtime_put(dev);
 	pm_runtime_disable(dev);
 	ks_pcie_disable_phy(ks_pcie);
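For readers trying this out: ks_init_restricted_dma() scans the controller node's "memory-region" phandles and arms the PVU for the first node compatible with "restricted-dma-pool". A minimal sketch of the devicetree wiring it expects might look like the fragment below; the node names, labels, address, and size are purely illustrative and not taken from this series:

```dts
/* Hypothetical example: reserve a DMA pool and point the PCIe RC at it. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* All inbound PCI DMA must land in this window once the PVU is armed. */
	pci_dma_pool: restricted-dma@c0000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0xc0000000 0x0 0x400000>;	/* 4 MiB, made-up address */
	};
};

&pcie0_rc {	/* illustrative label for the AM654 PCIe RC node */
	memory-region = <&pci_dma_pool>;
};
```

With such a region present, ks_init_vmap() maps all requester IDs on the "vmap_lp" and "vmap_hp" ports to VirtID 0 and ti_pvu_create_region() registers the pool as the only valid DMA target for that VirtID; without one, the -ENOENT path leaves the PVU setup skipped and probing proceeds as before.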