From patchwork Tue Nov 19 09:33:10 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hiroshi DOYU
X-Patchwork-Id: 3200691
From: Hiroshi Doyu
Subject: [PATCHv5 6/9] iommu/tegra: smmu: get swgroups from DT "iommus="
Date: Tue, 19 Nov 2013 11:33:10 +0200
Message-ID: <1384853593-32202-7-git-send-email-hdoyu@nvidia.com>
X-Mailer: git-send-email 1.8.1.5
In-Reply-To: <1384853593-32202-1-git-send-email-hdoyu@nvidia.com>
References: <1384853593-32202-1-git-send-email-hdoyu@nvidia.com>
Cc: mark.rutland@arm.com, devicetree@vger.kernel.org, lorenzo.pieralisi@arm.com,
 linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
 linux-tegra@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Hiroshi Doyu
This provides the information about which swgroups a device belongs to,
passed from DT via the "iommus=" property. This is necessary for a unified
SMMU driver across Tegra SoCs, since each SoC has a different set of H/W
accelerators.

Signed-off-by: Hiroshi Doyu
---
v5: "iommus=" in a device DT node is used instead of "mmu-masters" in an
    iommu DT node. This is the "iommus=" version of:
      [PATCHv4 5/7] iommu/tegra: smmu: Support "mmu-masters" binding
---
 .../bindings/iommu/nvidia,tegra30-smmu.txt |  17 ++-
 drivers/iommu/tegra-smmu.c                 | 125 ++++++++++++++++++---
 2 files changed, 126 insertions(+), 16 deletions(-)

diff --git a/Documentation/devicetree/bindings/iommu/nvidia,tegra30-smmu.txt b/Documentation/devicetree/bindings/iommu/nvidia,tegra30-smmu.txt
index 89fb543..44a4dc3 100644
--- a/Documentation/devicetree/bindings/iommu/nvidia,tegra30-smmu.txt
+++ b/Documentation/devicetree/bindings/iommu/nvidia,tegra30-smmu.txt
@@ -8,9 +8,12 @@ Required properties:
 - nvidia,#asids : # of ASIDs
 - dma-window : IOVA start address and length.
 - nvidia,ahb : phandle to the ahb bus connected to SMMU.
+- iommus: phandle to an iommu device which a device is
+  attached to and indicates which swgroups a device belongs to (SWGROUP ID).
+  SWGROUP ID is from 0 to 63, and a device can belong to multiple SWGROUPS.
 
 Example:
-	smmu {
+	smmu: iommu {
 		compatible = "nvidia,tegra30-smmu";
 		reg = <0x7000f010 0x02c
 		       0x7000f1f0 0x010
@@ -18,4 +21,16 @@ Example:
 		nvidia,#asids = <4>;		/* # of ASIDs */
 		dma-window = <0 0x40000000>;	/* IOVA start & length */
 		nvidia,ahb = <&ahb>;
+		#iommu-cells = <2>;
 	};
+
+	host1x {
+		compatible = "nvidia,tegra30-host1x", "simple-bus";
+		iommus = <&smmu TEGRA_SWGROUP_CELLS(HC)>;
+		....
+		gr3d {
+			compatible = "nvidia,tegra30-gr3d";
+			nvidia,memory-clients = <&smmu TEGRA_SWGROUP_CELLS(NV)
+						 TEGRA_SWGROUP_CELLS(NV2)>;
+			....
+	};
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index e999ad0..e915201a 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -190,6 +190,8 @@ enum {
  * Per client for address space
  */
 struct smmu_client {
+	struct device_node	*of_node;
+	struct rb_node		node;
 	struct device		*dev;
 	struct list_head	list;
 	struct smmu_as		*as;
@@ -233,6 +235,7 @@ struct smmu_device {
 	spinlock_t	lock;
 	char		*name;
 	struct device	*dev;
+	struct rb_root	clients;
 	struct page *avp_vector_page;	/* dummy page shared by all AS's */
 
 	/*
@@ -310,6 +313,96 @@ static inline void smmu_write(struct smmu_device *smmu, u32 val, size_t offs)
  */
 #define FLUSH_SMMU_REGS(smmu)	smmu_read(smmu, SMMU_CONFIG)
 
+static struct smmu_client *find_smmu_client(struct smmu_device *smmu,
+					    struct device_node *dev_node)
+{
+	struct rb_node *node = smmu->clients.rb_node;
+
+	while (node) {
+		struct smmu_client *client;
+
+		client = container_of(node, struct smmu_client, node);
+		if (dev_node < client->of_node)
+			node = node->rb_left;
+		else if (dev_node > client->of_node)
+			node = node->rb_right;
+		else
+			return client;
+	}
+
+	return NULL;
+}
+
+static int insert_smmu_client(struct smmu_device *smmu,
+			      struct smmu_client *client)
+{
+	struct rb_node **new, *parent;
+
+	new = &smmu->clients.rb_node;
+	parent = NULL;
+	while (*new) {
+		struct smmu_client *this;
+		this = container_of(*new, struct smmu_client, node);
+
+		parent = *new;
+		if (client->of_node < this->of_node)
+			new = &((*new)->rb_left);
+		else if (client->of_node > this->of_node)
+			new = &((*new)->rb_right);
+		else
+			return -EEXIST;
+	}
+
+	rb_link_node(&client->node, parent, new);
+	rb_insert_color(&client->node, &smmu->clients);
+	return 0;
+}
+
+static int register_smmu_client(struct smmu_device *smmu,
+				struct device *dev, unsigned long *swgroups)
+{
+	struct smmu_client *client;
+
+	client = find_smmu_client(smmu, dev->of_node);
+	if (client) {
+		dev_err(dev,
+			"rejecting multiple registrations for client device %s\n",
+			dev->of_node->full_name);
+		return -EBUSY;
+	}
+
+	client = devm_kzalloc(smmu->dev, sizeof(*client), GFP_KERNEL);
+	if (!client)
+		return -ENOMEM;
+
+	client->dev = dev;
+	client->of_node = dev->of_node;
+	memcpy(client->hwgrp, swgroups, sizeof(u64));
+	return insert_smmu_client(smmu, client);
+}
+
+static int smmu_of_get_swgroups(struct device *dev, unsigned long *swgroups)
+{
+	int i;
+	struct of_phandle_args args;
+
+	of_property_for_each_phandle_with_args(dev->of_node, "iommus",
+					       "#iommu-cells", i, &args) {
+		if (args.np != smmu_handle->dev->of_node)
+			continue;
+
+		BUG_ON(args.args_count != 2);
+
+		memcpy(swgroups, args.args, sizeof(u64));
+		pr_debug("swgroups=%08lx %08lx ops=%p %s\n",
+			 swgroups[0], swgroups[1],
+			 dev->bus->iommu_ops, dev_name(dev));
+		return 0;
+	}
+
+	return -ENODEV;
+}
+
 static int __smmu_client_set_hwgrp(struct smmu_client *c,
 				   unsigned long *map, int on)
 {
@@ -719,21 +812,16 @@ static int smmu_iommu_attach_dev(struct iommu_domain *domain,
 	struct smmu_as *as = domain->priv;
 	struct smmu_device *smmu = as->smmu;
 	struct smmu_client *client, *c;
-	unsigned long *map;
 	int err;
 
-	client = devm_kzalloc(smmu->dev, sizeof(*c), GFP_KERNEL);
+	client = find_smmu_client(smmu, dev->of_node);
 	if (!client)
 		return -ENOMEM;
 
-	client->dev = dev;
-	client->as = as;
-	map = (unsigned long *)dev->platform_data;
-	if (!map)
-		return -EINVAL;
-	err = smmu_client_enable_hwgrp(client, map);
+	client->as = as;
+	err = smmu_client_enable_hwgrp(client, client->hwgrp);
 	if (err)
-		goto err_hwgrp;
+		return -EINVAL;
 
 	spin_lock(&as->client_lock);
 	list_for_each_entry(c, &as->client, list) {
@@ -751,7 +839,7 @@ static int smmu_iommu_attach_dev(struct iommu_domain *domain,
 	 * Reserve "page zero" for AVP vectors using a common dummy
 	 * page.
 	 */
-	if (test_bit(TEGRA_SWGROUP_AVPC, map)) {
+	if (test_bit(TEGRA_SWGROUP_AVPC, client->hwgrp)) {
 		struct page *page;
 
 		page = as->smmu->avp_vector_page;
@@ -766,8 +854,6 @@ static int smmu_iommu_attach_dev(struct iommu_domain *domain,
 err_client:
 	smmu_client_disable_hwgrp(client);
 	spin_unlock(&as->client_lock);
-err_hwgrp:
-	devm_kfree(smmu->dev, client);
 	return err;
 }
 
@@ -784,7 +870,6 @@ static void smmu_iommu_detach_dev(struct iommu_domain *domain,
 		if (c->dev == dev) {
 			smmu_client_disable_hwgrp(c);
 			list_del(&c->list);
-			devm_kfree(smmu->dev, c);
 			c->as = NULL;
 			dev_dbg(smmu->dev, "%s is detached\n",
 				dev_name(c->dev));
@@ -896,6 +981,15 @@ static int smmu_iommu_add_device(struct device *dev)
 	if (err)
 		return -ENODEV;
 
+	if (!find_smmu_client(smmu_handle, dev->of_node)) {
+		err = register_smmu_client(smmu_handle, dev, swgroups);
+		if (err) {
+			dev_err(dev, "failed to add client %s\n",
+				dev_name(dev));
+			return -EINVAL;
+		}
+	}
+
 	if (test_bit(TEGRA_SWGROUP_PPCS, swgroups))
 		map = smmu_handle->map[SYSTEM_PROTECTED];
 	else
@@ -906,8 +1000,8 @@ static int smmu_iommu_add_device(struct device *dev)
 	else
 		return -EPROBE_DEFER;
 
-	pr_debug("swgroups=%08lx map=%p err=%d %s\n",
-		 swgroups, map, err, dev_name(dev));
+	pr_debug("swgroups=%08lx %08lx map=%p err=%d %s\n",
+		 swgroups[0], swgroups[1], map, err, dev_name(dev));
 
 	return err;
 }
@@ -1160,6 +1254,7 @@ static int tegra_smmu_probe(struct platform_device *pdev)
 		return -ENOMEM;
 	}
 
+	smmu->clients = RB_ROOT;
 	smmu->map = (struct dma_iommu_mapping **)(smmu->as + asids);
 	smmu->nregs = pdev->num_resources;
 	smmu->regs = devm_kzalloc(dev, 2 * smmu->nregs * sizeof(*smmu->regs),
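
Editorial note: the following is a minimal, standalone sketch (not part of the
patch) of how smmu_of_get_swgroups() and the later test_bit() calls interpret
the two 32-bit "iommus" cells as a 64-bit swgroup bitmap: cell 0 carries
SWGROUP IDs 0-31 and cell 1 carries IDs 32-63. The cell values below are
hypothetical, and little-endian byte order (as on Tegra) is assumed so that a
memcpy() into a 64-bit word mirrors the driver's copy into its unsigned long
bitmap.

/*
 * Standalone illustration only -- not part of the patch.  Mimics how
 * smmu_of_get_swgroups() copies the two "iommus" cells into a 64-bit
 * swgroup bitmap that is later queried with test_bit().  Cell values
 * are hypothetical; little-endian byte order is assumed.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Two cells from a hypothetical "iommus = <&smmu 0x2 0x8>;" entry:
	 * cell 0 covers SWGROUP IDs 0-31, cell 1 covers IDs 32-63. */
	uint32_t cells[2] = { 0x00000002, 0x00000008 };
	uint64_t swgroups = 0;
	int id;

	/* Mirrors memcpy(swgroups, args.args, sizeof(u64)) in the driver. */
	memcpy(&swgroups, cells, sizeof(swgroups));

	for (id = 0; id < 64; id++)
		if (swgroups & (1ULL << id))	/* like test_bit(id, swgroups) */
			printf("device belongs to SWGROUP %d\n", id);

	return 0;	/* prints SWGROUP 1 and SWGROUP 35 */
}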