From patchwork Mon Apr  4 08:07:02 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8738291
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: julien.grall@arm.com, patches@linaro.org, Jean-Philippe.Brucker@arm.com,
	Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
	linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
	iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
	suravee.suthikulpanit@amd.com
Subject: [PATCH v6 7/7] dma-reserved-iommu: iommu_unmap_reserved
Date: Mon, 4 Apr 2016 08:07:02 +0000
Message-Id: <1459757222-2668-8-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1459757222-2668-1-git-send-email-eric.auger@linaro.org>
References: <1459757222-2668-1-git-send-email-eric.auger@linaro.org>

Introduce a new function whose role is to unmap all allocated reserved
IOVAs and free the reserved iova domain.

Signed-off-by: Eric Auger <eric.auger@linaro.org>

---
v5 -> v6:
- use spin_lock instead of mutex

v3 -> v4:
- previously "iommu/arm-smmu: relinquish reserved resources on
  domain deletion"
---
 drivers/iommu/dma-reserved-iommu.c | 45 ++++++++++++++++++++++++++++++++++----
 include/linux/dma-reserved-iommu.h |  7 ++++++
 2 files changed, 48 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index 3c759d9..c06c39e 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -119,20 +119,24 @@ unlock:
 }
 EXPORT_SYMBOL_GPL(iommu_alloc_reserved_iova_domain);
 
-void iommu_free_reserved_iova_domain(struct iommu_domain *domain)
+void __iommu_free_reserved_iova_domain(struct iommu_domain *domain)
 {
 	struct iova_domain *iovad =
 		(struct iova_domain *)domain->reserved_iova_cookie;
-	unsigned long flags;
 
 	if (!iovad)
 		return;
 
-	spin_lock_irqsave(&domain->reserved_lock, flags);
-
 	put_iova_domain(iovad);
 	kfree(iovad);
+}
+
+void iommu_free_reserved_iova_domain(struct iommu_domain *domain)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+	__iommu_free_reserved_iova_domain(domain);
 	spin_unlock_irqrestore(&domain->reserved_lock, flags);
 }
 EXPORT_SYMBOL_GPL(iommu_free_reserved_iova_domain);
@@ -281,4 +285,37 @@ unlock:
 EXPORT_SYMBOL_GPL(iommu_put_reserved_iova);
 
+static void reserved_binding_release(struct kref *kref)
+{
+	struct iommu_reserved_binding *b =
+		container_of(kref, struct iommu_reserved_binding, kref);
+	struct iommu_domain *d = b->domain;
+
+	delete_reserved_binding(d, b);
+}
+
+void iommu_unmap_reserved(struct iommu_domain *domain)
+{
+	struct rb_node *node;
+	unsigned long flags;
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+	while ((node = rb_first(&domain->reserved_binding_list))) {
+		struct iommu_reserved_binding *b =
+			rb_entry(node, struct iommu_reserved_binding, node);
+
+		unlink_reserved_binding(domain, b);
+		spin_unlock_irqrestore(&domain->reserved_lock, flags);
+
+		while (!kref_put(&b->kref, reserved_binding_release))
+			;
+		spin_lock_irqsave(&domain->reserved_lock, flags);
+	}
+	domain->reserved_binding_list = RB_ROOT;
+	__iommu_free_reserved_iova_domain(domain);
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+}
+EXPORT_SYMBOL_GPL(iommu_unmap_reserved);
+
+
diff --git a/include/linux/dma-reserved-iommu.h b/include/linux/dma-reserved-iommu.h
index dedea56..9fba930 100644
--- a/include/linux/dma-reserved-iommu.h
+++ b/include/linux/dma-reserved-iommu.h
@@ -68,6 +68,13 @@ int iommu_get_reserved_iova(struct iommu_domain *domain,
  */
 void iommu_put_reserved_iova(struct iommu_domain *domain, dma_addr_t iova);
 
+/**
+ * iommu_unmap_reserved: unmap & destroy the reserved iova bindings
+ *
+ * @domain: iommu domain handle
+ */
+void iommu_unmap_reserved(struct iommu_domain *domain);
+
 #endif	/* CONFIG_IOMMU_DMA_RESERVED */
 #endif	/* __KERNEL__ */
 #endif	/* __DMA_RESERVED_IOMMU_H */