From patchwork Tue Jul 21 07:30:29 2015
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 6832581
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu
Subject: [PATCH v3 1/5] iommu/arm-smmu: to support probe deferral
Date: Tue, 21 Jul 2015 15:30:29 +0800
Message-ID: <1437463833-16112-2-git-send-email-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1437463833-16112-1-git-send-email-thunder.leizhen@huawei.com>
Cc: Xinwei Hu, Zhen Lei, Zefan Li, Robin Murphy, Tianhong Ding

For PCI devices, only the root node has an "iommus" property, so we
should traverse all of its sub-devices in of_xlate.
There exist two cases:

Case 1: .add_device(sub node) happens before .of_xlate(root node)
Case 2: .add_device(sub node) happens after .of_xlate(root node)

(1).add_device
	if (!root->archdata.iommu)
		return -ENODEV;

(2).of_xlate
	root->archdata.iommu = smmu;
	/*
	 * Probe the pci devices deferred in phase (1)
	 */

(3).add_device
	/*
	 * After phase (2), it's not NULL
	 */
	if (!root->archdata.iommu)
		return -ENODEV;

	__arm_smmu_add_pci_device(pdev, root->archdata.iommu);

Reviewed-by: Robin Murphy
Signed-off-by: Zhen Lei
---
 drivers/iommu/arm-smmu-v3.c | 147 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 117 insertions(+), 30 deletions(-)

-- 
1.8.0

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4f09337..474eca4 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -30,6 +30,8 @@
 #include 
 #include 
 #include 
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
 
 #include "io-pgtable.h"
 
@@ -1741,32 +1743,37 @@ static void __arm_smmu_release_pci_iommudata(void *data)
 	kfree(data);
 }
 
-static struct arm_smmu_device *arm_smmu_get_for_pci_dev(struct pci_dev *pdev)
+static struct arm_smmu_device *find_smmu_by_device(struct device *dev)
+{
+	struct device_node *np = dev->of_node;
+
+	/* Ensure np is an SMMU device node */
+	if (!of_iommu_get_ops(np))
+		return NULL;
+
+	return dev->archdata.iommu;
+}
+
+static struct arm_smmu_device *find_smmu_by_node(struct device_node *np)
+{
+	struct platform_device *pdev;
+
+	pdev = of_find_device_by_node(np);
+	if (!pdev)
+		return NULL;
+
+	return find_smmu_by_device(&pdev->dev);
+}
+
+static struct device *arm_smmu_get_pci_dev_root(struct pci_dev *pdev)
 {
-	struct device_node *of_node;
-	struct arm_smmu_device *curr, *smmu = NULL;
 	struct pci_bus *bus = pdev->bus;
 
 	/* Walk up to the root bus */
 	while (!pci_is_root_bus(bus))
 		bus = bus->parent;
 
-	/* Follow the "iommus" phandle from the host controller */
-	of_node = of_parse_phandle(bus->bridge->parent->of_node, "iommus", 0);
-	if (!of_node)
-		return NULL;
-
-	/* See if we can find an SMMU corresponding to the phandle */
-	spin_lock(&arm_smmu_devices_lock);
-	list_for_each_entry(curr, &arm_smmu_devices, list) {
-		if (curr->dev->of_node == of_node) {
-			smmu = curr;
-			break;
-		}
-	}
-	spin_unlock(&arm_smmu_devices_lock);
-	of_node_put(of_node);
-	return smmu;
+	return bus->bridge->parent;
 }
 
 static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
@@ -1779,27 +1786,21 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 	return sid < limit;
 }
 
-static int arm_smmu_add_device(struct device *dev)
+static int __arm_smmu_add_device(struct device *dev, u32 sid)
 {
 	int i, ret;
-	u32 sid, *sids;
-	struct pci_dev *pdev;
+	u32 *sids;
 	struct iommu_group *group;
 	struct arm_smmu_group *smmu_group;
 	struct arm_smmu_device *smmu;
 
-	/* We only support PCI, for now */
-	if (!dev_is_pci(dev))
-		return -ENODEV;
-
-	pdev = to_pci_dev(dev);
 	group = iommu_group_get_for_dev(dev);
 	if (IS_ERR(group))
 		return PTR_ERR(group);
 
 	smmu_group = iommu_group_get_iommudata(group);
 	if (!smmu_group) {
-		smmu = arm_smmu_get_for_pci_dev(pdev);
+		smmu = dev->archdata.iommu;
 		if (!smmu) {
 			ret = -ENOENT;
 			goto out_put_group;
@@ -1819,8 +1820,6 @@ static int arm_smmu_add_device(struct device *dev)
 		smmu = smmu_group->smmu;
 	}
 
-	/* Assume SID == RID until firmware tells us otherwise */
-	pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid, &sid);
 	for (i = 0; i < smmu_group->num_sids; ++i) {
 		/* If we already know about this SID, then we're done */
 		if (smmu_group->sids[i] == sid)
@@ -1857,11 +1856,43 @@ static int arm_smmu_add_device(struct device *dev)
 out_put_group:
 	iommu_group_put(group);
+	dev_err(dev, "failed to add device to SMMU\n");
 	return ret;
 }
 
+static int __arm_smmu_add_pci_device(struct pci_dev *pdev, void *smmu)
+{
+	u32 sid;
+	struct device *dev = &pdev->dev;
+
+	/* Assume SID == RID until firmware tells us otherwise */
+	pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid, &sid);
+
+	dev->archdata.iommu = smmu;
+
+	return __arm_smmu_add_device(dev, sid);
+}
+
+static int arm_smmu_add_device(struct device *dev)
+{
+	struct pci_dev *pdev;
+	struct device *root;
+
+	/* We only support PCI device hotplug, for now */
+	if (!dev_is_pci(dev))
+		return -ENODEV;
+
+	pdev = to_pci_dev(dev);
+	root = arm_smmu_get_pci_dev_root(pdev);
+	if (!root->archdata.iommu)
+		return -ENODEV;
+
+	return __arm_smmu_add_pci_device(pdev, root->archdata.iommu);
+}
+
 static void arm_smmu_remove_device(struct device *dev)
 {
+	dev->archdata.iommu = NULL;
 	iommu_group_remove_device(dev);
 }
 
@@ -1909,7 +1940,58 @@ out_unlock:
 	return ret;
 }
 
+static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	int ret;
+	struct arm_smmu_device *smmu;
+
+	/*
+	 * We can be sure that args->np is an SMMU device node, because this
+	 * function is only called through the of_xlate hook.
+	 *
+	 * Furthermore, arm_smmu_device_dt_probe does:
+	 *	dev->archdata.iommu = smmu;
+	 *	of_iommu_set_ops(smmu->dev->of_node, &arm_smmu_ops);
+	 *
+	 * so find_smmu_by_node() should never return NULL here.
+	 */
+	smmu = find_smmu_by_node(args->np);
+	if (!smmu) {
+		dev_err(dev, "failed to find the SMMU for this device node\n");
+		return -ENODEV;
+	}
+
+	if (!dev->archdata.iommu)
+		dev->archdata.iommu = smmu;
+
+	if (dev->archdata.iommu != smmu) {
+		dev_err(dev, "device is behind more than one SMMU\n");
+		return -EINVAL;
+	}
+
+	/* We only support PCI, for now */
+	if (!dev_is_pci(dev)) {
+		return -ENODEV;
+	} else {
+		struct device *root;
+		struct pci_dev *pdev = NULL;
+
+		for_each_pci_dev(pdev) {
+			root = arm_smmu_get_pci_dev_root(pdev);
+			if (root->of_node != dev->of_node)
+				continue;
+
+			ret = __arm_smmu_add_pci_device(pdev, smmu);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
 static struct iommu_ops arm_smmu_ops = {
+	.of_xlate		= arm_smmu_of_xlate,
 	.capable		= arm_smmu_capable,
 	.domain_alloc		= arm_smmu_domain_alloc,
 	.domain_free		= arm_smmu_domain_free,
@@ -2635,6 +2717,9 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
 	spin_lock(&arm_smmu_devices_lock);
 	list_add(&smmu->list, &arm_smmu_devices);
 	spin_unlock(&arm_smmu_devices_lock);
+	dev->archdata.iommu = smmu;
+	of_iommu_set_ops(smmu->dev->of_node, &arm_smmu_ops);
+
 	return 0;
 
 out_free_structures:
@@ -2706,6 +2791,8 @@ static void __exit arm_smmu_exit(void)
 subsys_initcall(arm_smmu_init);
 module_exit(arm_smmu_exit);
 
+IOMMU_OF_DECLARE(arm_smmu_v3, "arm,smmu-v3", NULL);
+
 MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
 MODULE_AUTHOR("Will Deacon ");
 MODULE_LICENSE("GPL v2");