From patchwork Mon Feb 27 19:54:12 2017
From: Jean-Philippe Brucker
Subject: [RFC PATCH 01/30] iommu/arm-smmu-v3: Link groups and devices
Date: Mon, 27 Feb 2017 19:54:12 +0000
Message-Id: <20170227195441.5170-2-jean-philippe.brucker@arm.com>
In-Reply-To: <20170227195441.5170-1-jean-philippe.brucker@arm.com>
References: <20170227195441.5170-1-jean-philippe.brucker@arm.com>
Cc: Harv Abdulhamid, Will Deacon, Shanker Donthineni, Bjorn Helgaas,
 Sinan Kaya, Lorenzo Pieralisi, Catalin Marinas, Robin Murphy,
 Joerg Roedel, Nate Watterson, Alex Williamson, David Woodhouse,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 iommu@lists.linux-foundation.org, kvm@vger.kernel.org
List-ID: linux-pci@vger.kernel.org

Reintroduce smmu_group. This structure was removed during the generic DT
bindings rework, but will be needed when implementing PCIe ATS, to look up
devices attached to a given domain.

When unmapping from a domain, we need to send an invalidation to all
devices that could have stored the mapping in their ATC. It would be nice
to use the IOMMU API's iommu_group_for_each_dev, but that list is
protected by group->mutex, which we can't use because atc_invalidate won't
be allowed to sleep. So add a list of devices, protected by a spinlock.
Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/arm-smmu-v3.c | 74 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 5806a6acc94e..ce8b68fe254b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -625,6 +625,9 @@ struct arm_smmu_device {
 struct arm_smmu_master_data {
 	struct arm_smmu_device		*smmu;
 	struct arm_smmu_strtab_ent	ste;
+
+	struct device			*dev;
+	struct list_head		group_head;
 };
 
 /* SMMU private data for an IOMMU domain */
@@ -650,6 +653,11 @@ struct arm_smmu_domain {
 	struct iommu_domain		domain;
 };
 
+struct arm_smmu_group {
+	struct list_head		devices;
+	spinlock_t			devices_lock;
+};
+
 struct arm_smmu_option_prop {
 	u32 opt;
 	const char *prop;
@@ -665,6 +673,8 @@ static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
 	return container_of(dom, struct arm_smmu_domain, domain);
 }
 
+#define to_smmu_group iommu_group_get_iommudata
+
 static void parse_driver_options(struct arm_smmu_device *smmu)
 {
 	int i = 0;
@@ -1595,6 +1605,30 @@ static int arm_smmu_install_ste_for_dev(struct iommu_fwspec *fwspec)
 	return 0;
 }
 
+static void arm_smmu_group_release(void *smmu_group)
+{
+	kfree(smmu_group);
+}
+
+static struct arm_smmu_group *arm_smmu_group_alloc(struct iommu_group *group)
+{
+	struct arm_smmu_group *smmu_group = to_smmu_group(group);
+
+	if (smmu_group)
+		return smmu_group;
+
+	smmu_group = kzalloc(sizeof(*smmu_group), GFP_KERNEL);
+	if (!smmu_group)
+		return NULL;
+
+	INIT_LIST_HEAD(&smmu_group->devices);
+	spin_lock_init(&smmu_group->devices_lock);
+
+	iommu_group_set_iommudata(group, smmu_group, arm_smmu_group_release);
+
+	return smmu_group;
+}
+
 static void arm_smmu_detach_dev(struct device *dev)
 {
 	struct arm_smmu_master_data *master = dev->iommu_fwspec->iommu_priv;
@@ -1607,7 +1641,9 @@ static void arm_smmu_detach_dev(struct device *dev)
 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
 	int ret = 0;
+	struct iommu_group *group;
 	struct arm_smmu_device *smmu;
+	struct arm_smmu_group *smmu_group;
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_master_data *master;
 	struct arm_smmu_strtab_ent *ste;
@@ -1619,6 +1655,17 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	smmu = master->smmu;
 	ste = &master->ste;
 
+	/*
+	 * When adding devices, this is the first occasion we have to create the
+	 * smmu_group and attach it to iommu_group.
+	 */
+	group = iommu_group_get(dev);
+	smmu_group = arm_smmu_group_alloc(group);
+	if (!smmu_group) {
+		iommu_group_put(group);
+		return -ENOMEM;
+	}
+
 	/* Already attached to a different domain? */
 	if (!ste->bypass)
 		arm_smmu_detach_dev(dev);
@@ -1659,6 +1706,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 
 out_unlock:
 	mutex_unlock(&smmu_domain->init_mutex);
+
+	iommu_group_put(group);
+
 	return ret;
 }
 
@@ -1745,7 +1795,9 @@ static struct iommu_ops arm_smmu_ops;
 static int arm_smmu_add_device(struct device *dev)
 {
 	int i, ret;
+	unsigned long flags;
 	struct arm_smmu_device *smmu;
+	struct arm_smmu_group *smmu_group;
 	struct arm_smmu_master_data *master;
 	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
 	struct iommu_group *group;
@@ -1769,6 +1821,7 @@ static int arm_smmu_add_device(struct device *dev)
 			return -ENOMEM;
 
 		master->smmu = smmu;
+		master->dev = dev;
 		fwspec->iommu_priv = master;
 	}
 
@@ -1789,6 +1842,12 @@ static int arm_smmu_add_device(struct device *dev)
 
 	group = iommu_group_get_for_dev(dev);
 	if (!IS_ERR(group)) {
+		smmu_group = to_smmu_group(group);
+
+		spin_lock_irqsave(&smmu_group->devices_lock, flags);
+		list_add(&master->group_head, &smmu_group->devices);
+		spin_unlock_irqrestore(&smmu_group->devices_lock, flags);
+
 		iommu_group_put(group);
 		iommu_device_link(&smmu->iommu, dev);
 	}
@@ -1800,7 +1859,10 @@ static void arm_smmu_remove_device(struct device *dev)
 {
 	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
 	struct arm_smmu_master_data *master;
+	struct arm_smmu_group *smmu_group;
 	struct arm_smmu_device *smmu;
+	struct iommu_group *group;
+	unsigned long flags;
 
 	if (!fwspec || fwspec->ops != &arm_smmu_ops)
 		return;
@@ -1809,6 +1871,18 @@ static void arm_smmu_remove_device(struct device *dev)
 	smmu = master->smmu;
 	if (master && master->ste.valid)
 		arm_smmu_detach_dev(dev);
+
+	if (master) {
+		group = iommu_group_get(dev);
+		smmu_group = to_smmu_group(group);
+
+		spin_lock_irqsave(&smmu_group->devices_lock, flags);
+		list_del(&master->group_head);
+		spin_unlock_irqrestore(&smmu_group->devices_lock, flags);
+
+		iommu_group_put(group);
+	}
+
 	iommu_group_remove_device(dev);
 	iommu_device_unlink(&smmu->iommu, dev);
 	kfree(master);