From patchwork Thu Jun 20 08:24:39 2013
X-Patchwork-Submitter: Hiroshi DOYU
X-Patchwork-Id: 2753731
From: Hiroshi Doyu <hdoyu@nvidia.com>
To: "nishanth.p@gmail.com"
Date: Thu, 20 Jun 2013 10:24:39 +0200
Subject: Re: [Linaro-mm-sig] [RFC 2/3] ARM: dma-mapping: Pass DMA attrs as IOMMU prot
Message-ID: <20130620.112439.1330557591655135630.hdoyu@nvidia.com>
Cc: "linux-tegra@vger.kernel.org", "linaro-mm-sig@lists.linaro.org",
 "iommu@lists.linux-foundation.org", "linux-arm-kernel@lists.infradead.org",
 "m.szyprowski@samsung.com"

Hi Nishanth,

Nishanth Peethambaran wrote @ Thu, 20 Jun 2013 10:07:00 +0200:

> It would be better to define a prot flag bit in the iommu API and
> convert the attrs to a prot flag bit in the dma-mapping API before
> calling the iommu API.

That's the 1st option.
> On Thu, Jun 20, 2013 at 11:19 AM, Hiroshi Doyu wrote:
....
> > @@ -1280,7 +1281,7 @@ ____iommu_create_mapping(struct device *dev, dma_addr_t *req,
> >                         break;
> >
> >                 len = (j - i) << PAGE_SHIFT;
> > -               ret = iommu_map(mapping->domain, iova, phys, len, 0);
> > +               ret = iommu_map(mapping->domain, iova, phys, len, (int)attrs);
>
> Use dma_get_attr and translate the READ_ONLY attr to a new READ_ONLY
> prot flag bit, which needs to be defined in iommu.h.

Both DMA_ATTR_READ_ONLY and IOMMU_READ are just logical bits in their
respective layers, and each is eventually converted to a H/W-dependent
bit. If IOMMU is considered one specific case of DMA, sharing dma_attrs
between the IOMMU and DMA layers may not be so bad. IIRC, the ARM
dma-mapping API was implemented based on this concept(?).

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index d8f98b1..161a1b0 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -755,7 +755,7 @@ int iommu_domain_has_cap(struct iommu_domain *domain,
 EXPORT_SYMBOL_GPL(iommu_domain_has_cap);
 
 int iommu_map(struct iommu_domain *domain, unsigned long iova,
-	      phys_addr_t paddr, size_t size, int prot)
+	      phys_addr_t paddr, size_t size, struct dma_attr *attrs)
 {
 	unsigned long orig_iova = iova;
 	unsigned int min_pagesz;