From patchwork Thu Jun 20 10:58:44 2013
From: Hiroshi Doyu
Subject: [PATCH 3/3] iommu/tegra: smmu: Support read-only mapping
Date: Thu, 20 Jun 2013 13:58:44 +0300
Message-ID: <1371725924-13957-4-git-send-email-hdoyu@nvidia.com>
Cc: linux-tegra@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
 iommu@lists.linux-foundation.org, Hiroshi Doyu

Support read-only mapping via struct dma_attrs.
Signed-off-by: Hiroshi Doyu
---
 drivers/iommu/tegra-smmu.c | 41 +++++++++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index fab1f19..3aff4cd 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -862,12 +862,13 @@ static size_t __smmu_iommu_unmap_largepage(struct smmu_as *as, dma_addr_t iova)
 }
 
 static int __smmu_iommu_map_pfn(struct smmu_as *as, dma_addr_t iova,
-				unsigned long pfn)
+				unsigned long pfn, int prot)
 {
 	struct smmu_device *smmu = as->smmu;
 	unsigned long *pte;
 	unsigned int *count;
 	struct page *page;
+	int attrs = as->pte_attr;
 
 	pte = locate_pte(as, iova, true, &page, &count);
 	if (WARN_ON(!pte))
@@ -875,7 +876,11 @@ static int __smmu_iommu_map_pfn(struct smmu_as *as, dma_addr_t iova,
 
 	if (*pte == _PTE_VACANT(iova))
 		(*count)++;
-	*pte = SMMU_PFN_TO_PTE(pfn, as->pte_attr);
+
+	if (dma_get_attr(DMA_ATTR_READ_ONLY, (struct dma_attrs *)prot))
+		attrs &= ~_WRITABLE;
+
+	*pte = SMMU_PFN_TO_PTE(pfn, attrs);
 	FLUSH_CPU_DCACHE(pte, page, sizeof(*pte));
 	flush_ptc_and_tlb(smmu, as, iova, pte, page, 0);
 	put_signature(as, iova, pfn);
@@ -883,23 +888,27 @@ static int __smmu_iommu_map_pfn(struct smmu_as *as, dma_addr_t iova,
 }
 
 static int __smmu_iommu_map_page(struct smmu_as *as, dma_addr_t iova,
-				 phys_addr_t pa)
+				 phys_addr_t pa, int prot)
 {
 	unsigned long pfn = __phys_to_pfn(pa);
 
-	return __smmu_iommu_map_pfn(as, iova, pfn);
+	return __smmu_iommu_map_pfn(as, iova, pfn, prot);
 }
 
 static int __smmu_iommu_map_largepage(struct smmu_as *as, dma_addr_t iova,
-				      phys_addr_t pa)
+				      phys_addr_t pa, int prot)
 {
 	unsigned long pdn = SMMU_ADDR_TO_PDN(iova);
 	unsigned long *pdir = (unsigned long *)page_address(as->pdir_page);
+	int attrs = _PDE_ATTR;
 
 	if (pdir[pdn] != _PDE_VACANT(pdn))
 		return -EINVAL;
 
-	pdir[pdn] = SMMU_ADDR_TO_PDN(pa) << 10 | _PDE_ATTR;
+	if (dma_get_attr(DMA_ATTR_READ_ONLY, (struct dma_attrs *)prot))
+		attrs &= ~_WRITABLE;
+
+	pdir[pdn] = SMMU_ADDR_TO_PDN(pa) << 10 | attrs;
 	FLUSH_CPU_DCACHE(&pdir[pdn], as->pdir_page, sizeof pdir[pdn]);
 	flush_ptc_and_tlb(as->smmu, as, iova, &pdir[pdn], as->pdir_page, 1);
 
@@ -912,7 +921,8 @@ static int smmu_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	struct smmu_as *as = domain->priv;
 	unsigned long flags;
 	int err;
-	int (*fn)(struct smmu_as *as, dma_addr_t iova, phys_addr_t pa);
+	int (*fn)(struct smmu_as *as, dma_addr_t iova, phys_addr_t pa,
+		  int prot);
 
 	dev_dbg(as->smmu->dev, "[%d] %08lx:%08x\n", as->asid, iova, pa);
 
@@ -929,7 +939,7 @@ static int smmu_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	}
 
 	spin_lock_irqsave(&as->lock, flags);
-	err = fn(as, iova, pa);
+	err = fn(as, iova, pa, prot);
 	spin_unlock_irqrestore(&as->lock, flags);
 	return err;
 }
@@ -943,6 +953,10 @@ static int smmu_iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
 	unsigned long *pdir = page_address(as->pdir_page);
 	int err = 0;
 	bool flush_all = (total > SZ_512) ? true : false;
+	int attrs = as->pte_attr;
+
+	if (dma_get_attr(DMA_ATTR_READ_ONLY, (struct dma_attrs *)prot))
+		attrs &= ~_WRITABLE;
 
 	spin_lock_irqsave(&as->lock, flags);
 
@@ -977,8 +991,7 @@ static int smmu_iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
 			if (*pte == _PTE_VACANT(iova + i * PAGE_SIZE))
 				(*rest)++;
 
-			*pte = SMMU_PFN_TO_PTE(page_to_pfn(pages[i]),
-					       as->pte_attr);
+			*pte = SMMU_PFN_TO_PTE(page_to_pfn(pages[i]), attrs);
 		}
 
 		pte = &ptbl[ptn];
@@ -1010,6 +1023,10 @@ static int smmu_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 	bool flush_all = (nents * PAGE_SIZE > SZ_512) ? true : false;
 	struct smmu_as *as = domain->priv;
 	struct smmu_device *smmu = as->smmu;
+	int attrs = as->pte_attr;
+
+	if (dma_get_attr(DMA_ATTR_READ_ONLY, (struct dma_attrs *)prot))
+		attrs &= ~_WRITABLE;
 
 	for (count = 0, s = sgl; count < nents; s = sg_next(s)) {
 		phys_addr_t phys = page_to_phys(sg_page(s));
@@ -1053,7 +1070,7 @@ static int smmu_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 				(*rest)++;
 		}
 
-		*pte = SMMU_PFN_TO_PTE(pfn + i, as->pte_attr);
+		*pte = SMMU_PFN_TO_PTE(pfn + i, attrs);
 	}
 
 	pte = &ptbl[ptn];
@@ -1191,7 +1208,7 @@ static int smmu_iommu_attach_dev(struct iommu_domain *domain,
 		struct page *page;
 
 		page = as->smmu->avp_vector_page;
-		__smmu_iommu_map_pfn(as, 0, page_to_pfn(page));
+		__smmu_iommu_map_pfn(as, 0, page_to_pfn(page), 0);
 
 		pr_debug("Reserve \"page zero\" \
 for AVP vectors using a common dummy\n");
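
For reference, a minimal caller-side sketch (illustrative only, not part of
the patch): it assumes DMA_ATTR_READ_ONLY is introduced earlier in this
series, that domain, iova, paddr and dev already exist in the caller, and
that the int prot argument of iommu_map() is used to carry a struct
dma_attrs pointer, mirroring the (struct dma_attrs *)prot cast in the hunks
above.

	/* needs <linux/dma-attrs.h> and <linux/iommu.h> */
	DEFINE_DMA_ATTRS(attrs);	/* all attribute bits start cleared */
	int err;

	/* ask for a read-only mapping for the device */
	dma_set_attr(DMA_ATTR_READ_ONLY, &attrs);

	/* the driver above re-interprets prot as a struct dma_attrs * */
	err = iommu_map(domain, iova, paddr, PAGE_SIZE, (int)&attrs);
	if (err)
		dev_err(dev, "read-only IOMMU mapping failed: %d\n", err);

With DMA_ATTR_READ_ONLY set, the helpers above clear _WRITABLE from the
PTE/PDE attribute bits, so the device can read but not write the mapped
range.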